00:00:00.001 Started by upstream project "autotest-spdk-v24.01-LTS-vs-dpdk-v23.11" build number 594 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3260 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.001 Started by timer 00:00:00.088 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.088 The recommended git tool is: git 00:00:00.088 using credential 00000000-0000-0000-0000-000000000002 00:00:00.091 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.132 Fetching changes from the remote Git repository 00:00:00.136 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.186 Using shallow fetch with depth 1 00:00:00.186 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.186 > git --version # timeout=10 00:00:00.238 > git --version # 'git version 2.39.2' 00:00:00.238 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.279 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.279 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.461 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.474 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.486 Checking out Revision 4b79378c7834917407ff4d2cff4edf1dcbb13c5f (FETCH_HEAD) 00:00:06.486 > git config core.sparsecheckout # timeout=10 00:00:06.499 > git read-tree -mu HEAD # timeout=10 00:00:06.517 > git checkout -f 4b79378c7834917407ff4d2cff4edf1dcbb13c5f # timeout=5 00:00:06.540 Commit message: "jbp-per-patch: add create-perf-report job as a part of testing" 00:00:06.541 > git rev-list --no-walk 4b79378c7834917407ff4d2cff4edf1dcbb13c5f # timeout=10 00:00:06.670 [Pipeline] Start of Pipeline 00:00:06.688 [Pipeline] library 00:00:06.690 Loading library shm_lib@master 00:00:06.690 Library shm_lib@master is cached. Copying from home. 00:00:06.706 [Pipeline] node 00:00:21.708 Still waiting to schedule task 00:00:21.709 Waiting for next available executor on ‘vagrant-vm-host’ 00:12:01.428 Running on VM-host-SM16 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:12:01.430 [Pipeline] { 00:12:01.446 [Pipeline] catchError 00:12:01.448 [Pipeline] { 00:12:01.465 [Pipeline] wrap 00:12:01.477 [Pipeline] { 00:12:01.488 [Pipeline] stage 00:12:01.490 [Pipeline] { (Prologue) 00:12:01.513 [Pipeline] echo 00:12:01.515 Node: VM-host-SM16 00:12:01.521 [Pipeline] cleanWs 00:12:01.530 [WS-CLEANUP] Deleting project workspace... 00:12:01.530 [WS-CLEANUP] Deferred wipeout is used... 
00:12:01.537 [WS-CLEANUP] done 00:12:01.751 [Pipeline] setCustomBuildProperty 00:12:01.817 [Pipeline] httpRequest 00:12:01.834 [Pipeline] echo 00:12:01.836 Sorcerer 10.211.164.101 is alive 00:12:01.844 [Pipeline] httpRequest 00:12:01.848 HttpMethod: GET 00:12:01.848 URL: http://10.211.164.101/packages/jbp_4b79378c7834917407ff4d2cff4edf1dcbb13c5f.tar.gz 00:12:01.848 Sending request to url: http://10.211.164.101/packages/jbp_4b79378c7834917407ff4d2cff4edf1dcbb13c5f.tar.gz 00:12:01.850 Response Code: HTTP/1.1 200 OK 00:12:01.850 Success: Status code 200 is in the accepted range: 200,404 00:12:01.850 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp_4b79378c7834917407ff4d2cff4edf1dcbb13c5f.tar.gz 00:12:01.994 [Pipeline] sh 00:12:02.272 + tar --no-same-owner -xf jbp_4b79378c7834917407ff4d2cff4edf1dcbb13c5f.tar.gz 00:12:02.288 [Pipeline] httpRequest 00:12:02.306 [Pipeline] echo 00:12:02.308 Sorcerer 10.211.164.101 is alive 00:12:02.317 [Pipeline] httpRequest 00:12:02.321 HttpMethod: GET 00:12:02.321 URL: http://10.211.164.101/packages/spdk_4b94202c659be49093c32ec1d2d75efdacf00691.tar.gz 00:12:02.322 Sending request to url: http://10.211.164.101/packages/spdk_4b94202c659be49093c32ec1d2d75efdacf00691.tar.gz 00:12:02.322 Response Code: HTTP/1.1 200 OK 00:12:02.323 Success: Status code 200 is in the accepted range: 200,404 00:12:02.323 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk_4b94202c659be49093c32ec1d2d75efdacf00691.tar.gz 00:12:04.799 [Pipeline] sh 00:12:05.075 + tar --no-same-owner -xf spdk_4b94202c659be49093c32ec1d2d75efdacf00691.tar.gz 00:12:08.365 [Pipeline] sh 00:12:08.642 + git -C spdk log --oneline -n5 00:12:08.642 4b94202c6 lib/event: Bug fix for framework_set_scheduler 00:12:08.642 507e9ba07 nvme: add lock_depth for ctrlr_lock 00:12:08.642 62fda7b5f nvme: check pthread_mutex_destroy() return value 00:12:08.642 e03c164a1 nvme: add nvme_ctrlr_lock 00:12:08.642 d61f89a86 nvme/cuse: Add ctrlr_lock for cuse register and unregister 00:12:08.663 [Pipeline] withCredentials 00:12:08.673 > git --version # timeout=10 00:12:08.685 > git --version # 'git version 2.39.2' 00:12:08.697 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:12:08.699 [Pipeline] { 00:12:08.707 [Pipeline] retry 00:12:08.709 [Pipeline] { 00:12:08.722 [Pipeline] sh 00:12:08.993 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11 00:12:09.004 [Pipeline] } 00:12:09.026 [Pipeline] // retry 00:12:09.032 [Pipeline] } 00:12:09.054 [Pipeline] // withCredentials 00:12:09.066 [Pipeline] httpRequest 00:12:09.083 [Pipeline] echo 00:12:09.084 Sorcerer 10.211.164.101 is alive 00:12:09.093 [Pipeline] httpRequest 00:12:09.098 HttpMethod: GET 00:12:09.098 URL: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:12:09.099 Sending request to url: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:12:09.099 Response Code: HTTP/1.1 200 OK 00:12:09.099 Success: Status code 200 is in the accepted range: 200,404 00:12:09.100 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:12:10.330 [Pipeline] sh 00:12:10.608 + tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:12:12.527 [Pipeline] sh 00:12:12.806 + git -C dpdk log --oneline -n5 00:12:12.806 eeb0605f11 version: 23.11.0 00:12:12.806 238778122a doc: update release notes for 23.11 00:12:12.806 46aa6b3cfc doc: fix description of RSS features 
00:12:12.806 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:12:12.806 7e421ae345 devtools: support skipping forbid rule check 00:12:12.825 [Pipeline] writeFile 00:12:12.841 [Pipeline] sh 00:12:13.120 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:12:13.134 [Pipeline] sh 00:12:13.417 + cat autorun-spdk.conf 00:12:13.417 SPDK_RUN_FUNCTIONAL_TEST=1 00:12:13.417 SPDK_TEST_NVMF=1 00:12:13.417 SPDK_TEST_NVMF_TRANSPORT=tcp 00:12:13.417 SPDK_TEST_URING=1 00:12:13.417 SPDK_TEST_USDT=1 00:12:13.417 SPDK_RUN_UBSAN=1 00:12:13.417 NET_TYPE=virt 00:12:13.417 SPDK_TEST_NATIVE_DPDK=v23.11 00:12:13.417 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:12:13.417 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:12:13.424 RUN_NIGHTLY=1 00:12:13.426 [Pipeline] } 00:12:13.445 [Pipeline] // stage 00:12:13.463 [Pipeline] stage 00:12:13.465 [Pipeline] { (Run VM) 00:12:13.480 [Pipeline] sh 00:12:13.759 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:12:13.759 + echo 'Start stage prepare_nvme.sh' 00:12:13.759 Start stage prepare_nvme.sh 00:12:13.759 + [[ -n 5 ]] 00:12:13.759 + disk_prefix=ex5 00:12:13.759 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest ]] 00:12:13.759 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf ]] 00:12:13.759 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf 00:12:13.759 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:12:13.759 ++ SPDK_TEST_NVMF=1 00:12:13.759 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:12:13.759 ++ SPDK_TEST_URING=1 00:12:13.759 ++ SPDK_TEST_USDT=1 00:12:13.759 ++ SPDK_RUN_UBSAN=1 00:12:13.759 ++ NET_TYPE=virt 00:12:13.759 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:12:13.759 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:12:13.759 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:12:13.759 ++ RUN_NIGHTLY=1 00:12:13.759 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:12:13.759 + nvme_files=() 00:12:13.759 + declare -A nvme_files 00:12:13.759 + backend_dir=/var/lib/libvirt/images/backends 00:12:13.759 + nvme_files['nvme.img']=5G 00:12:13.759 + nvme_files['nvme-cmb.img']=5G 00:12:13.759 + nvme_files['nvme-multi0.img']=4G 00:12:13.759 + nvme_files['nvme-multi1.img']=4G 00:12:13.759 + nvme_files['nvme-multi2.img']=4G 00:12:13.759 + nvme_files['nvme-openstack.img']=8G 00:12:13.759 + nvme_files['nvme-zns.img']=5G 00:12:13.759 + (( SPDK_TEST_NVME_PMR == 1 )) 00:12:13.759 + (( SPDK_TEST_FTL == 1 )) 00:12:13.759 + (( SPDK_TEST_NVME_FDP == 1 )) 00:12:13.759 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:12:13.759 + for nvme in "${!nvme_files[@]}" 00:12:13.759 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi2.img -s 4G 00:12:13.759 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:12:13.759 + for nvme in "${!nvme_files[@]}" 00:12:13.759 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-cmb.img -s 5G 00:12:13.759 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:12:13.759 + for nvme in "${!nvme_files[@]}" 00:12:13.759 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-openstack.img -s 8G 00:12:13.759 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:12:13.759 + for nvme in "${!nvme_files[@]}" 00:12:13.759 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-zns.img -s 5G 00:12:13.759 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:12:13.759 + for nvme in "${!nvme_files[@]}" 00:12:13.759 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi1.img -s 4G 00:12:13.759 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:12:13.759 + for nvme in "${!nvme_files[@]}" 00:12:13.759 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi0.img -s 4G 00:12:13.759 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:12:13.759 + for nvme in "${!nvme_files[@]}" 00:12:13.759 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme.img -s 5G 00:12:13.759 Formatting '/var/lib/libvirt/images/backends/ex5-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:12:13.759 ++ sudo grep -rl ex5-nvme.img /etc/libvirt/qemu 00:12:13.759 + echo 'End stage prepare_nvme.sh' 00:12:13.759 End stage prepare_nvme.sh 00:12:13.772 [Pipeline] sh 00:12:14.053 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:12:14.053 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex5-nvme.img -b /var/lib/libvirt/images/backends/ex5-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img -H -a -v -f fedora38 00:12:14.053 00:12:14.053 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant 00:12:14.053 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk 00:12:14.053 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:12:14.053 HELP=0 00:12:14.053 DRY_RUN=0 00:12:14.053 NVME_FILE=/var/lib/libvirt/images/backends/ex5-nvme.img,/var/lib/libvirt/images/backends/ex5-nvme-multi0.img, 00:12:14.053 NVME_DISKS_TYPE=nvme,nvme, 00:12:14.053 NVME_AUTO_CREATE=0 00:12:14.053 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img, 00:12:14.053 NVME_CMB=,, 00:12:14.053 NVME_PMR=,, 00:12:14.053 NVME_ZNS=,, 00:12:14.053 NVME_MS=,, 00:12:14.053 NVME_FDP=,, 
00:12:14.053 SPDK_VAGRANT_DISTRO=fedora38 00:12:14.053 SPDK_VAGRANT_VMCPU=10 00:12:14.053 SPDK_VAGRANT_VMRAM=12288 00:12:14.053 SPDK_VAGRANT_PROVIDER=libvirt 00:12:14.053 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:12:14.053 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:12:14.053 SPDK_OPENSTACK_NETWORK=0 00:12:14.053 VAGRANT_PACKAGE_BOX=0 00:12:14.053 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:12:14.053 FORCE_DISTRO=true 00:12:14.053 VAGRANT_BOX_VERSION= 00:12:14.053 EXTRA_VAGRANTFILES= 00:12:14.053 NIC_MODEL=e1000 00:12:14.053 00:12:14.053 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt' 00:12:14.053 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:12:17.350 Bringing machine 'default' up with 'libvirt' provider... 00:12:17.917 ==> default: Creating image (snapshot of base box volume). 00:12:18.175 ==> default: Creating domain with the following settings... 00:12:18.175 ==> default: -- Name: fedora38-38-1.6-1716830599-074-updated-1705279005_default_1720733138_c9b8ae037efa3b73b356 00:12:18.175 ==> default: -- Domain type: kvm 00:12:18.175 ==> default: -- Cpus: 10 00:12:18.175 ==> default: -- Feature: acpi 00:12:18.175 ==> default: -- Feature: apic 00:12:18.175 ==> default: -- Feature: pae 00:12:18.175 ==> default: -- Memory: 12288M 00:12:18.175 ==> default: -- Memory Backing: hugepages: 00:12:18.175 ==> default: -- Management MAC: 00:12:18.175 ==> default: -- Loader: 00:12:18.175 ==> default: -- Nvram: 00:12:18.175 ==> default: -- Base box: spdk/fedora38 00:12:18.175 ==> default: -- Storage pool: default 00:12:18.175 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1716830599-074-updated-1705279005_default_1720733138_c9b8ae037efa3b73b356.img (20G) 00:12:18.175 ==> default: -- Volume Cache: default 00:12:18.175 ==> default: -- Kernel: 00:12:18.175 ==> default: -- Initrd: 00:12:18.175 ==> default: -- Graphics Type: vnc 00:12:18.175 ==> default: -- Graphics Port: -1 00:12:18.175 ==> default: -- Graphics IP: 127.0.0.1 00:12:18.175 ==> default: -- Graphics Password: Not defined 00:12:18.175 ==> default: -- Video Type: cirrus 00:12:18.175 ==> default: -- Video VRAM: 9216 00:12:18.175 ==> default: -- Sound Type: 00:12:18.175 ==> default: -- Keymap: en-us 00:12:18.175 ==> default: -- TPM Path: 00:12:18.175 ==> default: -- INPUT: type=mouse, bus=ps2 00:12:18.175 ==> default: -- Command line args: 00:12:18.175 ==> default: -> value=-device, 00:12:18.175 ==> default: -> value=nvme,id=nvme-0,serial=12340, 00:12:18.175 ==> default: -> value=-drive, 00:12:18.175 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme.img,if=none,id=nvme-0-drive0, 00:12:18.175 ==> default: -> value=-device, 00:12:18.175 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:12:18.175 ==> default: -> value=-device, 00:12:18.175 ==> default: -> value=nvme,id=nvme-1,serial=12341, 00:12:18.175 ==> default: -> value=-drive, 00:12:18.175 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:12:18.175 ==> default: -> value=-device, 00:12:18.175 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:12:18.175 ==> default: -> value=-drive, 00:12:18.175 ==> default: 
-> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:12:18.175 ==> default: -> value=-device, 00:12:18.175 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:12:18.176 ==> default: -> value=-drive, 00:12:18.176 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:12:18.176 ==> default: -> value=-device, 00:12:18.176 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:12:18.176 ==> default: Creating shared folders metadata... 00:12:18.176 ==> default: Starting domain. 00:12:20.708 ==> default: Waiting for domain to get an IP address... 00:12:38.788 ==> default: Waiting for SSH to become available... 00:12:38.788 ==> default: Configuring and enabling network interfaces... 00:12:42.976 default: SSH address: 192.168.121.18:22 00:12:42.976 default: SSH username: vagrant 00:12:42.976 default: SSH auth method: private key 00:12:44.976 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:12:51.534 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:12:58.091 ==> default: Mounting SSHFS shared folder... 00:12:59.461 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:12:59.461 ==> default: Checking Mount.. 00:13:00.395 ==> default: Folder Successfully Mounted! 00:13:00.654 ==> default: Running provisioner: file... 00:13:01.226 default: ~/.gitconfig => .gitconfig 00:13:01.790 00:13:01.790 SUCCESS! 00:13:01.790 00:13:01.790 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt and type "vagrant ssh" to use. 00:13:01.790 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:13:01.790 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt" to destroy all trace of vm. 00:13:01.790 00:13:01.800 [Pipeline] } 00:13:01.820 [Pipeline] // stage 00:13:01.830 [Pipeline] dir 00:13:01.831 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt 00:13:01.833 [Pipeline] { 00:13:01.849 [Pipeline] catchError 00:13:01.851 [Pipeline] { 00:13:01.865 [Pipeline] sh 00:13:02.140 + vagrant ssh-config --host vagrant 00:13:02.140 + sed -ne /^Host/,$p 00:13:02.140 + tee ssh_conf 00:13:06.323 Host vagrant 00:13:06.323 HostName 192.168.121.18 00:13:06.323 User vagrant 00:13:06.323 Port 22 00:13:06.323 UserKnownHostsFile /dev/null 00:13:06.323 StrictHostKeyChecking no 00:13:06.323 PasswordAuthentication no 00:13:06.323 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1716830599-074-updated-1705279005/libvirt/fedora38 00:13:06.323 IdentitiesOnly yes 00:13:06.323 LogLevel FATAL 00:13:06.323 ForwardAgent yes 00:13:06.323 ForwardX11 yes 00:13:06.323 00:13:06.337 [Pipeline] withEnv 00:13:06.340 [Pipeline] { 00:13:06.355 [Pipeline] sh 00:13:06.632 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:13:06.632 source /etc/os-release 00:13:06.632 [[ -e /image.version ]] && img=$(< /image.version) 00:13:06.632 # Minimal, systemd-like check. 
00:13:06.632 if [[ -e /.dockerenv ]]; then 00:13:06.632 # Clear garbage from the node's name: 00:13:06.632 # agt-er_autotest_547-896 -> autotest_547-896 00:13:06.632 # $HOSTNAME is the actual container id 00:13:06.632 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:13:06.632 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:13:06.632 # We can assume this is a mount from a host where container is running, 00:13:06.632 # so fetch its hostname to easily identify the target swarm worker. 00:13:06.632 container="$(< /etc/hostname) ($agent)" 00:13:06.632 else 00:13:06.632 # Fallback 00:13:06.632 container=$agent 00:13:06.632 fi 00:13:06.632 fi 00:13:06.632 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:13:06.632 00:13:06.997 [Pipeline] } 00:13:07.018 [Pipeline] // withEnv 00:13:07.027 [Pipeline] setCustomBuildProperty 00:13:07.043 [Pipeline] stage 00:13:07.045 [Pipeline] { (Tests) 00:13:07.070 [Pipeline] sh 00:13:07.350 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:13:07.622 [Pipeline] sh 00:13:07.902 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:13:08.176 [Pipeline] timeout 00:13:08.176 Timeout set to expire in 30 min 00:13:08.178 [Pipeline] { 00:13:08.196 [Pipeline] sh 00:13:08.474 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:13:09.040 HEAD is now at 4b94202c6 lib/event: Bug fix for framework_set_scheduler 00:13:09.056 [Pipeline] sh 00:13:09.339 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:13:09.609 [Pipeline] sh 00:13:09.886 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:13:09.906 [Pipeline] sh 00:13:10.186 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo 00:13:10.451 ++ readlink -f spdk_repo 00:13:10.451 + DIR_ROOT=/home/vagrant/spdk_repo 00:13:10.451 + [[ -n /home/vagrant/spdk_repo ]] 00:13:10.451 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:13:10.451 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:13:10.451 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:13:10.451 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:13:10.451 + [[ -d /home/vagrant/spdk_repo/output ]] 00:13:10.451 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]] 00:13:10.451 + cd /home/vagrant/spdk_repo 00:13:10.451 + source /etc/os-release 00:13:10.451 ++ NAME='Fedora Linux' 00:13:10.451 ++ VERSION='38 (Cloud Edition)' 00:13:10.451 ++ ID=fedora 00:13:10.451 ++ VERSION_ID=38 00:13:10.451 ++ VERSION_CODENAME= 00:13:10.451 ++ PLATFORM_ID=platform:f38 00:13:10.451 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:13:10.451 ++ ANSI_COLOR='0;38;2;60;110;180' 00:13:10.451 ++ LOGO=fedora-logo-icon 00:13:10.451 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:13:10.451 ++ HOME_URL=https://fedoraproject.org/ 00:13:10.451 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:13:10.451 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:13:10.451 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:13:10.451 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:13:10.451 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:13:10.451 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:13:10.451 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:13:10.451 ++ SUPPORT_END=2024-05-14 00:13:10.451 ++ VARIANT='Cloud Edition' 00:13:10.451 ++ VARIANT_ID=cloud 00:13:10.451 + uname -a 00:13:10.451 Linux fedora38-cloud-1716830599-074-updated-1705279005 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:13:10.451 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:13:10.451 Hugepages 00:13:10.451 node hugesize free / total 00:13:10.451 node0 1048576kB 0 / 0 00:13:10.451 node0 2048kB 0 / 0 00:13:10.451 00:13:10.451 Type BDF Vendor Device NUMA Driver Device Block devices 00:13:10.451 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:13:10.451 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:13:10.451 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:13:10.451 + rm -f /tmp/spdk-ld-path 00:13:10.451 + source autorun-spdk.conf 00:13:10.451 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:13:10.451 ++ SPDK_TEST_NVMF=1 00:13:10.451 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:13:10.451 ++ SPDK_TEST_URING=1 00:13:10.451 ++ SPDK_TEST_USDT=1 00:13:10.451 ++ SPDK_RUN_UBSAN=1 00:13:10.451 ++ NET_TYPE=virt 00:13:10.451 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:13:10.452 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:13:10.452 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:13:10.452 ++ RUN_NIGHTLY=1 00:13:10.452 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:13:10.452 + [[ -n '' ]] 00:13:10.452 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:13:10.452 + for M in /var/spdk/build-*-manifest.txt 00:13:10.452 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:13:10.452 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:13:10.720 + for M in /var/spdk/build-*-manifest.txt 00:13:10.720 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:13:10.720 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:13:10.720 ++ uname 00:13:10.720 + [[ Linux == \L\i\n\u\x ]] 00:13:10.720 + sudo dmesg -T 00:13:10.720 + sudo dmesg --clear 00:13:10.720 + dmesg_pid=5984 00:13:10.720 + [[ Fedora Linux == FreeBSD ]] 00:13:10.720 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:13:10.720 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:13:10.720 + sudo dmesg -Tw 00:13:10.720 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:13:10.720 + [[ -x /usr/src/fio-static/fio ]] 00:13:10.720 + export FIO_BIN=/usr/src/fio-static/fio 
00:13:10.720 + FIO_BIN=/usr/src/fio-static/fio 00:13:10.720 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:13:10.720 + [[ ! -v VFIO_QEMU_BIN ]] 00:13:10.720 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:13:10.720 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:13:10.720 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:13:10.720 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:13:10.720 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:13:10.720 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:13:10.720 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:13:10.720 Test configuration: 00:13:10.720 SPDK_RUN_FUNCTIONAL_TEST=1 00:13:10.720 SPDK_TEST_NVMF=1 00:13:10.720 SPDK_TEST_NVMF_TRANSPORT=tcp 00:13:10.720 SPDK_TEST_URING=1 00:13:10.720 SPDK_TEST_USDT=1 00:13:10.720 SPDK_RUN_UBSAN=1 00:13:10.720 NET_TYPE=virt 00:13:10.720 SPDK_TEST_NATIVE_DPDK=v23.11 00:13:10.720 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:13:10.720 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:13:10.720 RUN_NIGHTLY=1 21:26:31 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:10.720 21:26:31 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:13:10.720 21:26:31 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:10.720 21:26:31 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:10.720 21:26:31 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:10.720 21:26:31 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:10.720 21:26:31 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:10.720 21:26:31 -- paths/export.sh@5 -- $ export PATH 00:13:10.720 21:26:31 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:10.720 21:26:31 -- common/autobuild_common.sh@434 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:13:10.720 21:26:31 -- common/autobuild_common.sh@435 -- $ date +%s 00:13:10.720 21:26:31 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1720733191.XXXXXX 00:13:10.720 21:26:31 -- 
common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1720733191.LePo1X 00:13:10.720 21:26:31 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:13:10.720 21:26:31 -- common/autobuild_common.sh@441 -- $ '[' -n v23.11 ']' 00:13:10.720 21:26:31 -- common/autobuild_common.sh@442 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:13:10.720 21:26:31 -- common/autobuild_common.sh@442 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:13:10.720 21:26:31 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:13:10.720 21:26:31 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:13:10.720 21:26:31 -- common/autobuild_common.sh@451 -- $ get_config_params 00:13:10.720 21:26:31 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:13:10.720 21:26:31 -- common/autotest_common.sh@10 -- $ set +x 00:13:10.720 21:26:31 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-dpdk=/home/vagrant/spdk_repo/dpdk/build' 00:13:10.720 21:26:31 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:13:10.720 21:26:31 -- spdk/autobuild.sh@12 -- $ umask 022 00:13:10.720 21:26:31 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:13:10.720 21:26:31 -- spdk/autobuild.sh@16 -- $ date -u 00:13:10.720 Thu Jul 11 09:26:31 PM UTC 2024 00:13:10.720 21:26:31 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:13:10.720 LTS-59-g4b94202c6 00:13:10.720 21:26:31 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:13:10.720 21:26:31 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:13:10.720 21:26:31 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:13:10.720 21:26:31 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:13:10.720 21:26:31 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:13:10.720 21:26:31 -- common/autotest_common.sh@10 -- $ set +x 00:13:10.720 ************************************ 00:13:10.720 START TEST ubsan 00:13:10.720 ************************************ 00:13:10.720 using ubsan 00:13:10.720 21:26:31 -- common/autotest_common.sh@1104 -- $ echo 'using ubsan' 00:13:10.720 00:13:10.720 real 0m0.000s 00:13:10.720 user 0m0.000s 00:13:10.720 sys 0m0.000s 00:13:10.720 21:26:31 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:13:10.720 ************************************ 00:13:10.720 END TEST ubsan 00:13:10.720 ************************************ 00:13:10.720 21:26:31 -- common/autotest_common.sh@10 -- $ set +x 00:13:10.720 21:26:31 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']' 00:13:10.720 21:26:31 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:13:10.720 21:26:31 -- common/autobuild_common.sh@427 -- $ run_test build_native_dpdk _build_native_dpdk 00:13:10.720 21:26:31 -- common/autotest_common.sh@1077 -- $ '[' 2 -le 1 ']' 00:13:10.720 21:26:31 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:13:10.720 21:26:31 -- common/autotest_common.sh@10 -- $ set +x 00:13:10.720 ************************************ 00:13:10.720 START TEST build_native_dpdk 00:13:10.720 ************************************ 00:13:10.720 21:26:31 -- common/autotest_common.sh@1104 -- $ _build_native_dpdk 00:13:10.720 
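The build_native_dpdk test that starts here builds DPDK out of tree and then points the SPDK configure step at the result; the --with-dpdk flag is visible in config_params above, and the meson/ninja invocations appear further below in this log. A condensed sketch of that flow, assuming the same /home/vagrant/spdk_repo layout (the explicit install step and the final SPDK make are assumptions; the job's own trace below uses the older `meson [options]` form and many more -D options):

  cd /home/vagrant/spdk_repo/dpdk
  # configure and build DPDK, installing into the prefix SPDK will be pointed at
  meson setup build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib \
      -Denable_docs=false -Denable_kmods=false -Dtests=false
  ninja -C build-tmp -j10
  ninja -C build-tmp install          # assumed: populate the prefix directory
  # build SPDK against that external DPDK instead of the bundled submodule
  cd /home/vagrant/spdk_repo/spdk
  ./configure --with-dpdk=/home/vagrant/spdk_repo/dpdk/build
  make -j10                           # assumed final build step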
21:26:31 -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:13:10.720 21:26:31 -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:13:10.720 21:26:31 -- common/autobuild_common.sh@50 -- $ local compiler_version 00:13:10.720 21:26:31 -- common/autobuild_common.sh@51 -- $ local compiler 00:13:10.720 21:26:31 -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:13:10.720 21:26:31 -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:13:10.720 21:26:31 -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:13:10.720 21:26:31 -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:13:10.720 21:26:31 -- common/autobuild_common.sh@61 -- $ CC=gcc 00:13:10.720 21:26:31 -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:13:10.720 21:26:31 -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:13:10.720 21:26:31 -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:13:10.980 21:26:31 -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:13:10.980 21:26:31 -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:13:10.980 21:26:31 -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:13:10.980 21:26:31 -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:13:10.980 21:26:31 -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:13:10.980 21:26:31 -- common/autobuild_common.sh@73 -- $ [[ ! -d /home/vagrant/spdk_repo/dpdk ]] 00:13:10.980 21:26:31 -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:13:10.980 21:26:31 -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:13:10.980 eeb0605f11 version: 23.11.0 00:13:10.980 238778122a doc: update release notes for 23.11 00:13:10.980 46aa6b3cfc doc: fix description of RSS features 00:13:10.980 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:13:10.980 7e421ae345 devtools: support skipping forbid rule check 00:13:10.980 21:26:31 -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:13:10.980 21:26:31 -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:13:10.980 21:26:31 -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0 00:13:10.980 21:26:31 -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:13:10.980 21:26:31 -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:13:10.980 21:26:31 -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:13:10.980 21:26:31 -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:13:10.980 21:26:31 -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:13:10.980 21:26:31 -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:13:10.980 21:26:31 -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:13:10.980 21:26:31 -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:13:10.980 21:26:31 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:13:10.980 21:26:31 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:13:10.980 21:26:31 -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:13:10.980 21:26:31 -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk 00:13:10.980 21:26:31 -- common/autobuild_common.sh@168 -- $ uname -s 00:13:10.980 21:26:31 -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:13:10.980 21:26:31 -- common/autobuild_common.sh@169 -- $ lt 23.11.0 21.11.0 00:13:10.980 
21:26:31 -- scripts/common.sh@372 -- $ cmp_versions 23.11.0 '<' 21.11.0 00:13:10.980 21:26:31 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:13:10.980 21:26:31 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:13:10.980 21:26:31 -- scripts/common.sh@335 -- $ IFS=.-: 00:13:10.980 21:26:31 -- scripts/common.sh@335 -- $ read -ra ver1 00:13:10.980 21:26:31 -- scripts/common.sh@336 -- $ IFS=.-: 00:13:10.980 21:26:31 -- scripts/common.sh@336 -- $ read -ra ver2 00:13:10.980 21:26:31 -- scripts/common.sh@337 -- $ local 'op=<' 00:13:10.980 21:26:31 -- scripts/common.sh@339 -- $ ver1_l=3 00:13:10.980 21:26:31 -- scripts/common.sh@340 -- $ ver2_l=3 00:13:10.980 21:26:31 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:13:10.980 21:26:31 -- scripts/common.sh@343 -- $ case "$op" in 00:13:10.980 21:26:31 -- scripts/common.sh@344 -- $ : 1 00:13:10.980 21:26:31 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:13:10.980 21:26:31 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:10.980 21:26:31 -- scripts/common.sh@364 -- $ decimal 23 00:13:10.980 21:26:31 -- scripts/common.sh@352 -- $ local d=23 00:13:10.980 21:26:31 -- scripts/common.sh@353 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:13:10.980 21:26:31 -- scripts/common.sh@354 -- $ echo 23 00:13:10.980 21:26:31 -- scripts/common.sh@364 -- $ ver1[v]=23 00:13:10.980 21:26:31 -- scripts/common.sh@365 -- $ decimal 21 00:13:10.980 21:26:31 -- scripts/common.sh@352 -- $ local d=21 00:13:10.980 21:26:31 -- scripts/common.sh@353 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:13:10.980 21:26:31 -- scripts/common.sh@354 -- $ echo 21 00:13:10.980 21:26:31 -- scripts/common.sh@365 -- $ ver2[v]=21 00:13:10.980 21:26:31 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:13:10.980 21:26:31 -- scripts/common.sh@366 -- $ return 1 00:13:10.980 21:26:31 -- common/autobuild_common.sh@173 -- $ patch -p1 00:13:10.980 patching file config/rte_config.h 00:13:10.980 Hunk #1 succeeded at 60 (offset 1 line). 
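The lt / cmp_versions trace just above splits each version string on '.', '-' and ':' and compares the fields numerically from left to right; 23 > 21, so lt returns 1 (23.11.0 is not less than 21.11.0) and the rte_config.h patch is applied. A standalone sketch of that comparison, where version_lt is a hypothetical name rather than the scripts/common.sh function itself:

  # return 0 (true) if $1 is an older version than $2, else 1
  version_lt() {
      local IFS=.-:
      local -a a b
      read -ra a <<< "$1"
      read -ra b <<< "$2"
      local i x y
      for (( i = 0; i < ${#a[@]} || i < ${#b[@]}; i++ )); do
          x=${a[i]:-0}; y=${b[i]:-0}
          (( x < y )) && return 0    # first differing field decides
          (( x > y )) && return 1
      done
      return 1                       # equal versions are not "less than"
  }

  version_lt 23.11.0 21.11.0 || echo "23.11.0 is not older than 21.11.0"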
00:13:10.980 21:26:31 -- common/autobuild_common.sh@177 -- $ dpdk_kmods=false 00:13:10.980 21:26:31 -- common/autobuild_common.sh@178 -- $ uname -s 00:13:10.980 21:26:31 -- common/autobuild_common.sh@178 -- $ '[' Linux = FreeBSD ']' 00:13:10.980 21:26:31 -- common/autobuild_common.sh@182 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:13:10.980 21:26:31 -- common/autobuild_common.sh@182 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:13:16.249 The Meson build system 00:13:16.249 Version: 1.3.1 00:13:16.249 Source dir: /home/vagrant/spdk_repo/dpdk 00:13:16.249 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:13:16.249 Build type: native build 00:13:16.249 Program cat found: YES (/usr/bin/cat) 00:13:16.249 Project name: DPDK 00:13:16.249 Project version: 23.11.0 00:13:16.249 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:13:16.249 C linker for the host machine: gcc ld.bfd 2.39-16 00:13:16.249 Host machine cpu family: x86_64 00:13:16.249 Host machine cpu: x86_64 00:13:16.249 Message: ## Building in Developer Mode ## 00:13:16.249 Program pkg-config found: YES (/usr/bin/pkg-config) 00:13:16.249 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:13:16.249 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:13:16.249 Program python3 found: YES (/usr/bin/python3) 00:13:16.249 Program cat found: YES (/usr/bin/cat) 00:13:16.249 config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
00:13:16.249 Compiler for C supports arguments -march=native: YES 00:13:16.249 Checking for size of "void *" : 8 00:13:16.249 Checking for size of "void *" : 8 (cached) 00:13:16.249 Library m found: YES 00:13:16.249 Library numa found: YES 00:13:16.249 Has header "numaif.h" : YES 00:13:16.249 Library fdt found: NO 00:13:16.249 Library execinfo found: NO 00:13:16.249 Has header "execinfo.h" : YES 00:13:16.249 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:13:16.249 Run-time dependency libarchive found: NO (tried pkgconfig) 00:13:16.249 Run-time dependency libbsd found: NO (tried pkgconfig) 00:13:16.249 Run-time dependency jansson found: NO (tried pkgconfig) 00:13:16.249 Run-time dependency openssl found: YES 3.0.9 00:13:16.249 Run-time dependency libpcap found: YES 1.10.4 00:13:16.249 Has header "pcap.h" with dependency libpcap: YES 00:13:16.249 Compiler for C supports arguments -Wcast-qual: YES 00:13:16.249 Compiler for C supports arguments -Wdeprecated: YES 00:13:16.249 Compiler for C supports arguments -Wformat: YES 00:13:16.249 Compiler for C supports arguments -Wformat-nonliteral: NO 00:13:16.249 Compiler for C supports arguments -Wformat-security: NO 00:13:16.249 Compiler for C supports arguments -Wmissing-declarations: YES 00:13:16.249 Compiler for C supports arguments -Wmissing-prototypes: YES 00:13:16.249 Compiler for C supports arguments -Wnested-externs: YES 00:13:16.249 Compiler for C supports arguments -Wold-style-definition: YES 00:13:16.249 Compiler for C supports arguments -Wpointer-arith: YES 00:13:16.249 Compiler for C supports arguments -Wsign-compare: YES 00:13:16.249 Compiler for C supports arguments -Wstrict-prototypes: YES 00:13:16.249 Compiler for C supports arguments -Wundef: YES 00:13:16.249 Compiler for C supports arguments -Wwrite-strings: YES 00:13:16.249 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:13:16.249 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:13:16.249 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:13:16.249 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:13:16.249 Program objdump found: YES (/usr/bin/objdump) 00:13:16.249 Compiler for C supports arguments -mavx512f: YES 00:13:16.249 Checking if "AVX512 checking" compiles: YES 00:13:16.249 Fetching value of define "__SSE4_2__" : 1 00:13:16.249 Fetching value of define "__AES__" : 1 00:13:16.249 Fetching value of define "__AVX__" : 1 00:13:16.249 Fetching value of define "__AVX2__" : 1 00:13:16.249 Fetching value of define "__AVX512BW__" : (undefined) 00:13:16.249 Fetching value of define "__AVX512CD__" : (undefined) 00:13:16.249 Fetching value of define "__AVX512DQ__" : (undefined) 00:13:16.249 Fetching value of define "__AVX512F__" : (undefined) 00:13:16.249 Fetching value of define "__AVX512VL__" : (undefined) 00:13:16.249 Fetching value of define "__PCLMUL__" : 1 00:13:16.249 Fetching value of define "__RDRND__" : 1 00:13:16.249 Fetching value of define "__RDSEED__" : 1 00:13:16.249 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:13:16.249 Fetching value of define "__znver1__" : (undefined) 00:13:16.249 Fetching value of define "__znver2__" : (undefined) 00:13:16.249 Fetching value of define "__znver3__" : (undefined) 00:13:16.249 Fetching value of define "__znver4__" : (undefined) 00:13:16.249 Compiler for C supports arguments -Wno-format-truncation: YES 00:13:16.249 Message: lib/log: Defining dependency "log" 00:13:16.249 Message: lib/kvargs: Defining dependency "kvargs" 00:13:16.249 
Message: lib/telemetry: Defining dependency "telemetry" 00:13:16.249 Checking for function "getentropy" : NO 00:13:16.249 Message: lib/eal: Defining dependency "eal" 00:13:16.249 Message: lib/ring: Defining dependency "ring" 00:13:16.249 Message: lib/rcu: Defining dependency "rcu" 00:13:16.249 Message: lib/mempool: Defining dependency "mempool" 00:13:16.249 Message: lib/mbuf: Defining dependency "mbuf" 00:13:16.249 Fetching value of define "__PCLMUL__" : 1 (cached) 00:13:16.249 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:13:16.249 Compiler for C supports arguments -mpclmul: YES 00:13:16.249 Compiler for C supports arguments -maes: YES 00:13:16.249 Compiler for C supports arguments -mavx512f: YES (cached) 00:13:16.249 Compiler for C supports arguments -mavx512bw: YES 00:13:16.249 Compiler for C supports arguments -mavx512dq: YES 00:13:16.249 Compiler for C supports arguments -mavx512vl: YES 00:13:16.249 Compiler for C supports arguments -mvpclmulqdq: YES 00:13:16.249 Compiler for C supports arguments -mavx2: YES 00:13:16.249 Compiler for C supports arguments -mavx: YES 00:13:16.249 Message: lib/net: Defining dependency "net" 00:13:16.249 Message: lib/meter: Defining dependency "meter" 00:13:16.249 Message: lib/ethdev: Defining dependency "ethdev" 00:13:16.249 Message: lib/pci: Defining dependency "pci" 00:13:16.249 Message: lib/cmdline: Defining dependency "cmdline" 00:13:16.249 Message: lib/metrics: Defining dependency "metrics" 00:13:16.249 Message: lib/hash: Defining dependency "hash" 00:13:16.249 Message: lib/timer: Defining dependency "timer" 00:13:16.249 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:13:16.249 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:13:16.249 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:13:16.249 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:13:16.249 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:13:16.249 Message: lib/acl: Defining dependency "acl" 00:13:16.249 Message: lib/bbdev: Defining dependency "bbdev" 00:13:16.249 Message: lib/bitratestats: Defining dependency "bitratestats" 00:13:16.249 Run-time dependency libelf found: YES 0.190 00:13:16.249 Message: lib/bpf: Defining dependency "bpf" 00:13:16.249 Message: lib/cfgfile: Defining dependency "cfgfile" 00:13:16.249 Message: lib/compressdev: Defining dependency "compressdev" 00:13:16.249 Message: lib/cryptodev: Defining dependency "cryptodev" 00:13:16.249 Message: lib/distributor: Defining dependency "distributor" 00:13:16.249 Message: lib/dmadev: Defining dependency "dmadev" 00:13:16.249 Message: lib/efd: Defining dependency "efd" 00:13:16.249 Message: lib/eventdev: Defining dependency "eventdev" 00:13:16.249 Message: lib/dispatcher: Defining dependency "dispatcher" 00:13:16.249 Message: lib/gpudev: Defining dependency "gpudev" 00:13:16.249 Message: lib/gro: Defining dependency "gro" 00:13:16.249 Message: lib/gso: Defining dependency "gso" 00:13:16.249 Message: lib/ip_frag: Defining dependency "ip_frag" 00:13:16.249 Message: lib/jobstats: Defining dependency "jobstats" 00:13:16.249 Message: lib/latencystats: Defining dependency "latencystats" 00:13:16.249 Message: lib/lpm: Defining dependency "lpm" 00:13:16.249 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:13:16.249 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:13:16.249 Fetching value of define "__AVX512IFMA__" : (undefined) 00:13:16.249 Compiler for C supports arguments -mavx512f 
-mavx512dq -mavx512ifma: YES 00:13:16.249 Message: lib/member: Defining dependency "member" 00:13:16.249 Message: lib/pcapng: Defining dependency "pcapng" 00:13:16.249 Compiler for C supports arguments -Wno-cast-qual: YES 00:13:16.249 Message: lib/power: Defining dependency "power" 00:13:16.249 Message: lib/rawdev: Defining dependency "rawdev" 00:13:16.249 Message: lib/regexdev: Defining dependency "regexdev" 00:13:16.249 Message: lib/mldev: Defining dependency "mldev" 00:13:16.249 Message: lib/rib: Defining dependency "rib" 00:13:16.250 Message: lib/reorder: Defining dependency "reorder" 00:13:16.250 Message: lib/sched: Defining dependency "sched" 00:13:16.250 Message: lib/security: Defining dependency "security" 00:13:16.250 Message: lib/stack: Defining dependency "stack" 00:13:16.250 Has header "linux/userfaultfd.h" : YES 00:13:16.250 Has header "linux/vduse.h" : YES 00:13:16.250 Message: lib/vhost: Defining dependency "vhost" 00:13:16.250 Message: lib/ipsec: Defining dependency "ipsec" 00:13:16.250 Message: lib/pdcp: Defining dependency "pdcp" 00:13:16.250 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:13:16.250 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:13:16.250 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:13:16.250 Compiler for C supports arguments -mavx512bw: YES (cached) 00:13:16.250 Message: lib/fib: Defining dependency "fib" 00:13:16.250 Message: lib/port: Defining dependency "port" 00:13:16.250 Message: lib/pdump: Defining dependency "pdump" 00:13:16.250 Message: lib/table: Defining dependency "table" 00:13:16.250 Message: lib/pipeline: Defining dependency "pipeline" 00:13:16.250 Message: lib/graph: Defining dependency "graph" 00:13:16.250 Message: lib/node: Defining dependency "node" 00:13:16.250 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:13:18.150 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:13:18.150 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:13:18.150 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:13:18.150 Compiler for C supports arguments -Wno-sign-compare: YES 00:13:18.150 Compiler for C supports arguments -Wno-unused-value: YES 00:13:18.150 Compiler for C supports arguments -Wno-format: YES 00:13:18.150 Compiler for C supports arguments -Wno-format-security: YES 00:13:18.150 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:13:18.150 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:13:18.150 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:13:18.150 Compiler for C supports arguments -Wno-unused-parameter: YES 00:13:18.150 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:13:18.150 Compiler for C supports arguments -mavx512f: YES (cached) 00:13:18.150 Compiler for C supports arguments -mavx512bw: YES (cached) 00:13:18.150 Compiler for C supports arguments -march=skylake-avx512: YES 00:13:18.150 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:13:18.150 Has header "sys/epoll.h" : YES 00:13:18.150 Program doxygen found: YES (/usr/bin/doxygen) 00:13:18.150 Configuring doxy-api-html.conf using configuration 00:13:18.150 Configuring doxy-api-man.conf using configuration 00:13:18.150 Program mandb found: YES (/usr/bin/mandb) 00:13:18.150 Program sphinx-build found: NO 00:13:18.150 Configuring rte_build_config.h using configuration 00:13:18.150 Message: 00:13:18.150 ================= 00:13:18.150 Applications Enabled 00:13:18.150 ================= 00:13:18.150 
00:13:18.150 apps: 00:13:18.150 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 00:13:18.150 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:13:18.150 test-pmd, test-regex, test-sad, test-security-perf, 00:13:18.150 00:13:18.150 Message: 00:13:18.150 ================= 00:13:18.150 Libraries Enabled 00:13:18.150 ================= 00:13:18.150 00:13:18.150 libs: 00:13:18.150 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:13:18.150 net, meter, ethdev, pci, cmdline, metrics, hash, timer, 00:13:18.150 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, 00:13:18.150 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag, 00:13:18.150 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev, 00:13:18.150 mldev, rib, reorder, sched, security, stack, vhost, ipsec, 00:13:18.150 pdcp, fib, port, pdump, table, pipeline, graph, node, 00:13:18.150 00:13:18.150 00:13:18.150 Message: 00:13:18.150 =============== 00:13:18.150 Drivers Enabled 00:13:18.150 =============== 00:13:18.150 00:13:18.150 common: 00:13:18.150 00:13:18.150 bus: 00:13:18.150 pci, vdev, 00:13:18.150 mempool: 00:13:18.150 ring, 00:13:18.150 dma: 00:13:18.150 00:13:18.150 net: 00:13:18.150 i40e, 00:13:18.150 raw: 00:13:18.150 00:13:18.150 crypto: 00:13:18.150 00:13:18.150 compress: 00:13:18.150 00:13:18.150 regex: 00:13:18.150 00:13:18.150 ml: 00:13:18.150 00:13:18.150 vdpa: 00:13:18.150 00:13:18.150 event: 00:13:18.150 00:13:18.150 baseband: 00:13:18.150 00:13:18.150 gpu: 00:13:18.150 00:13:18.150 00:13:18.150 Message: 00:13:18.150 ================= 00:13:18.150 Content Skipped 00:13:18.150 ================= 00:13:18.150 00:13:18.150 apps: 00:13:18.150 00:13:18.150 libs: 00:13:18.150 00:13:18.150 drivers: 00:13:18.150 common/cpt: not in enabled drivers build config 00:13:18.150 common/dpaax: not in enabled drivers build config 00:13:18.150 common/iavf: not in enabled drivers build config 00:13:18.151 common/idpf: not in enabled drivers build config 00:13:18.151 common/mvep: not in enabled drivers build config 00:13:18.151 common/octeontx: not in enabled drivers build config 00:13:18.151 bus/auxiliary: not in enabled drivers build config 00:13:18.151 bus/cdx: not in enabled drivers build config 00:13:18.151 bus/dpaa: not in enabled drivers build config 00:13:18.151 bus/fslmc: not in enabled drivers build config 00:13:18.151 bus/ifpga: not in enabled drivers build config 00:13:18.151 bus/platform: not in enabled drivers build config 00:13:18.151 bus/vmbus: not in enabled drivers build config 00:13:18.151 common/cnxk: not in enabled drivers build config 00:13:18.151 common/mlx5: not in enabled drivers build config 00:13:18.151 common/nfp: not in enabled drivers build config 00:13:18.151 common/qat: not in enabled drivers build config 00:13:18.151 common/sfc_efx: not in enabled drivers build config 00:13:18.151 mempool/bucket: not in enabled drivers build config 00:13:18.151 mempool/cnxk: not in enabled drivers build config 00:13:18.151 mempool/dpaa: not in enabled drivers build config 00:13:18.151 mempool/dpaa2: not in enabled drivers build config 00:13:18.151 mempool/octeontx: not in enabled drivers build config 00:13:18.151 mempool/stack: not in enabled drivers build config 00:13:18.151 dma/cnxk: not in enabled drivers build config 00:13:18.151 dma/dpaa: not in enabled drivers build config 00:13:18.151 dma/dpaa2: not in enabled drivers build config 00:13:18.151 dma/hisilicon: 
not in enabled drivers build config 00:13:18.151 dma/idxd: not in enabled drivers build config 00:13:18.151 dma/ioat: not in enabled drivers build config 00:13:18.151 dma/skeleton: not in enabled drivers build config 00:13:18.151 net/af_packet: not in enabled drivers build config 00:13:18.151 net/af_xdp: not in enabled drivers build config 00:13:18.151 net/ark: not in enabled drivers build config 00:13:18.151 net/atlantic: not in enabled drivers build config 00:13:18.151 net/avp: not in enabled drivers build config 00:13:18.151 net/axgbe: not in enabled drivers build config 00:13:18.151 net/bnx2x: not in enabled drivers build config 00:13:18.151 net/bnxt: not in enabled drivers build config 00:13:18.151 net/bonding: not in enabled drivers build config 00:13:18.151 net/cnxk: not in enabled drivers build config 00:13:18.151 net/cpfl: not in enabled drivers build config 00:13:18.151 net/cxgbe: not in enabled drivers build config 00:13:18.151 net/dpaa: not in enabled drivers build config 00:13:18.151 net/dpaa2: not in enabled drivers build config 00:13:18.151 net/e1000: not in enabled drivers build config 00:13:18.151 net/ena: not in enabled drivers build config 00:13:18.151 net/enetc: not in enabled drivers build config 00:13:18.151 net/enetfec: not in enabled drivers build config 00:13:18.151 net/enic: not in enabled drivers build config 00:13:18.151 net/failsafe: not in enabled drivers build config 00:13:18.151 net/fm10k: not in enabled drivers build config 00:13:18.151 net/gve: not in enabled drivers build config 00:13:18.151 net/hinic: not in enabled drivers build config 00:13:18.151 net/hns3: not in enabled drivers build config 00:13:18.151 net/iavf: not in enabled drivers build config 00:13:18.151 net/ice: not in enabled drivers build config 00:13:18.151 net/idpf: not in enabled drivers build config 00:13:18.151 net/igc: not in enabled drivers build config 00:13:18.151 net/ionic: not in enabled drivers build config 00:13:18.151 net/ipn3ke: not in enabled drivers build config 00:13:18.151 net/ixgbe: not in enabled drivers build config 00:13:18.151 net/mana: not in enabled drivers build config 00:13:18.151 net/memif: not in enabled drivers build config 00:13:18.151 net/mlx4: not in enabled drivers build config 00:13:18.151 net/mlx5: not in enabled drivers build config 00:13:18.151 net/mvneta: not in enabled drivers build config 00:13:18.151 net/mvpp2: not in enabled drivers build config 00:13:18.151 net/netvsc: not in enabled drivers build config 00:13:18.151 net/nfb: not in enabled drivers build config 00:13:18.151 net/nfp: not in enabled drivers build config 00:13:18.151 net/ngbe: not in enabled drivers build config 00:13:18.151 net/null: not in enabled drivers build config 00:13:18.151 net/octeontx: not in enabled drivers build config 00:13:18.151 net/octeon_ep: not in enabled drivers build config 00:13:18.151 net/pcap: not in enabled drivers build config 00:13:18.151 net/pfe: not in enabled drivers build config 00:13:18.151 net/qede: not in enabled drivers build config 00:13:18.151 net/ring: not in enabled drivers build config 00:13:18.151 net/sfc: not in enabled drivers build config 00:13:18.151 net/softnic: not in enabled drivers build config 00:13:18.151 net/tap: not in enabled drivers build config 00:13:18.151 net/thunderx: not in enabled drivers build config 00:13:18.151 net/txgbe: not in enabled drivers build config 00:13:18.151 net/vdev_netvsc: not in enabled drivers build config 00:13:18.151 net/vhost: not in enabled drivers build config 00:13:18.151 net/virtio: not in enabled 
drivers build config 00:13:18.151 net/vmxnet3: not in enabled drivers build config 00:13:18.151 raw/cnxk_bphy: not in enabled drivers build config 00:13:18.151 raw/cnxk_gpio: not in enabled drivers build config 00:13:18.151 raw/dpaa2_cmdif: not in enabled drivers build config 00:13:18.151 raw/ifpga: not in enabled drivers build config 00:13:18.151 raw/ntb: not in enabled drivers build config 00:13:18.151 raw/skeleton: not in enabled drivers build config 00:13:18.151 crypto/armv8: not in enabled drivers build config 00:13:18.151 crypto/bcmfs: not in enabled drivers build config 00:13:18.151 crypto/caam_jr: not in enabled drivers build config 00:13:18.151 crypto/ccp: not in enabled drivers build config 00:13:18.151 crypto/cnxk: not in enabled drivers build config 00:13:18.151 crypto/dpaa_sec: not in enabled drivers build config 00:13:18.151 crypto/dpaa2_sec: not in enabled drivers build config 00:13:18.151 crypto/ipsec_mb: not in enabled drivers build config 00:13:18.151 crypto/mlx5: not in enabled drivers build config 00:13:18.151 crypto/mvsam: not in enabled drivers build config 00:13:18.151 crypto/nitrox: not in enabled drivers build config 00:13:18.151 crypto/null: not in enabled drivers build config 00:13:18.151 crypto/octeontx: not in enabled drivers build config 00:13:18.151 crypto/openssl: not in enabled drivers build config 00:13:18.151 crypto/scheduler: not in enabled drivers build config 00:13:18.151 crypto/uadk: not in enabled drivers build config 00:13:18.151 crypto/virtio: not in enabled drivers build config 00:13:18.151 compress/isal: not in enabled drivers build config 00:13:18.151 compress/mlx5: not in enabled drivers build config 00:13:18.151 compress/octeontx: not in enabled drivers build config 00:13:18.151 compress/zlib: not in enabled drivers build config 00:13:18.151 regex/mlx5: not in enabled drivers build config 00:13:18.151 regex/cn9k: not in enabled drivers build config 00:13:18.151 ml/cnxk: not in enabled drivers build config 00:13:18.151 vdpa/ifc: not in enabled drivers build config 00:13:18.151 vdpa/mlx5: not in enabled drivers build config 00:13:18.151 vdpa/nfp: not in enabled drivers build config 00:13:18.151 vdpa/sfc: not in enabled drivers build config 00:13:18.151 event/cnxk: not in enabled drivers build config 00:13:18.151 event/dlb2: not in enabled drivers build config 00:13:18.151 event/dpaa: not in enabled drivers build config 00:13:18.151 event/dpaa2: not in enabled drivers build config 00:13:18.151 event/dsw: not in enabled drivers build config 00:13:18.151 event/opdl: not in enabled drivers build config 00:13:18.151 event/skeleton: not in enabled drivers build config 00:13:18.151 event/sw: not in enabled drivers build config 00:13:18.151 event/octeontx: not in enabled drivers build config 00:13:18.151 baseband/acc: not in enabled drivers build config 00:13:18.151 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:13:18.151 baseband/fpga_lte_fec: not in enabled drivers build config 00:13:18.151 baseband/la12xx: not in enabled drivers build config 00:13:18.151 baseband/null: not in enabled drivers build config 00:13:18.151 baseband/turbo_sw: not in enabled drivers build config 00:13:18.151 gpu/cuda: not in enabled drivers build config 00:13:18.151 00:13:18.151 00:13:18.151 Build targets in project: 220 00:13:18.151 00:13:18.151 DPDK 23.11.0 00:13:18.151 00:13:18.151 User defined options 00:13:18.151 libdir : lib 00:13:18.151 prefix : /home/vagrant/spdk_repo/dpdk/build 00:13:18.151 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 
00:13:18.151 c_link_args : 00:13:18.151 enable_docs : false 00:13:18.151 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:13:18.151 enable_kmods : false 00:13:18.151 machine : native 00:13:18.151 tests : false 00:13:18.151 00:13:18.151 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:13:18.152 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 00:13:18.152 21:26:38 -- common/autobuild_common.sh@186 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:13:18.152 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:13:18.152 [1/710] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:13:18.152 [2/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:13:18.152 [3/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:13:18.409 [4/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:13:18.410 [5/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:13:18.410 [6/710] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:13:18.410 [7/710] Linking static target lib/librte_kvargs.a 00:13:18.410 [8/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:13:18.410 [9/710] Compiling C object lib/librte_log.a.p/log_log.c.o 00:13:18.410 [10/710] Linking static target lib/librte_log.a 00:13:18.667 [11/710] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:13:18.925 [12/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:13:18.925 [13/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:13:18.925 [14/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:13:18.925 [15/710] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:13:18.925 [16/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:13:18.925 [17/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:13:18.925 [18/710] Linking target lib/librte_log.so.24.0 00:13:19.182 [19/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:13:19.439 [20/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:13:19.439 [21/710] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:13:19.439 [22/710] Linking target lib/librte_kvargs.so.24.0 00:13:19.439 [23/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:13:19.696 [24/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:13:19.696 [25/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:13:19.696 [26/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:13:19.696 [27/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:13:19.696 [28/710] Linking static target lib/librte_telemetry.a 00:13:19.696 [29/710] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:13:19.696 [30/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:13:19.696 [31/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:13:19.954 [32/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:13:20.212 [33/710] Generating lib/telemetry.sym_chk with a custom 
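[Reference note, not part of the captured build output.] The "User defined options" block above implies a DPDK configure step roughly equivalent to the sketch below. This is a reconstruction from the logged values only: the real invocation is wrapped by SPDK's autobuild scripts, the build directory and prefix are taken from the log, and the driver list is copied verbatim (including its trailing comma, which the logged build evidently accepted). Using the explicit `meson setup` form also avoids the deprecation warning printed above.

# Hypothetical reconstruction of the configure step implied by the logged options.
meson setup /home/vagrant/spdk_repo/dpdk/build-tmp \
    --prefix=/home/vagrant/spdk_repo/dpdk/build \
    --libdir=lib \
    -Dmachine=native \
    -Dtests=false \
    -Denable_docs=false \
    -Denable_kmods=false \
    -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, \
    -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow'
# Parallel build, matching the ninja command visible in the log:
ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10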
command (wrapped by meson to capture output) 00:13:20.212 [34/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:13:20.212 [35/710] Linking target lib/librte_telemetry.so.24.0 00:13:20.212 [36/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:13:20.212 [37/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:13:20.212 [38/710] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:13:20.212 [39/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:13:20.212 [40/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:13:20.212 [41/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:13:20.212 [42/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:13:20.212 [43/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:13:20.212 [44/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:13:20.470 [45/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:13:20.470 [46/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:13:20.727 [47/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:13:20.727 [48/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:13:20.727 [49/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:13:20.985 [50/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:13:20.985 [51/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:13:20.985 [52/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:13:20.985 [53/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:13:21.243 [54/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:13:21.243 [55/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:13:21.243 [56/710] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:13:21.243 [57/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:13:21.243 [58/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:13:21.243 [59/710] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:13:21.501 [60/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:13:21.501 [61/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:13:21.501 [62/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:13:21.501 [63/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:13:21.501 [64/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:13:21.759 [65/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:13:21.759 [66/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:13:21.759 [67/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:13:21.759 [68/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:13:22.018 [69/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:13:22.018 [70/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:13:22.018 [71/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:13:22.018 [72/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:13:22.018 [73/710] Compiling C 
object lib/librte_eal.a.p/eal_linux_eal.c.o 00:13:22.280 [74/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:13:22.280 [75/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:13:22.280 [76/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:13:22.280 [77/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:13:22.281 [78/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:13:22.540 [79/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:13:22.797 [80/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:13:22.797 [81/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:13:22.797 [82/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:13:22.797 [83/710] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:13:22.797 [84/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:13:22.797 [85/710] Linking static target lib/librte_ring.a 00:13:23.055 [86/710] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:13:23.055 [87/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:13:23.055 [88/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:13:23.055 [89/710] Linking static target lib/librte_eal.a 00:13:23.313 [90/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:13:23.313 [91/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:13:23.313 [92/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:13:23.571 [93/710] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:13:23.571 [94/710] Linking static target lib/librte_mempool.a 00:13:23.571 [95/710] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:13:23.571 [96/710] Linking static target lib/librte_rcu.a 00:13:23.571 [97/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:13:23.846 [98/710] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:13:23.846 [99/710] Linking static target lib/net/libnet_crc_avx512_lib.a 00:13:23.846 [100/710] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:13:23.846 [101/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:13:24.141 [102/710] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:13:24.141 [103/710] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:13:24.141 [104/710] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:13:24.141 [105/710] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:13:24.141 [106/710] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:13:24.399 [107/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:13:24.399 [108/710] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:13:24.399 [109/710] Linking static target lib/librte_mbuf.a 00:13:24.399 [110/710] Linking static target lib/librte_meter.a 00:13:24.657 [111/710] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:13:24.657 [112/710] Linking static target lib/librte_net.a 00:13:24.657 [113/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:13:24.657 [114/710] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:13:24.657 [115/710] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:13:24.915 [116/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:13:24.915 [117/710] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:13:24.915 [118/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:13:25.174 [119/710] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:13:25.432 [120/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:13:25.689 [121/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:13:25.948 [122/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:13:25.948 [123/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:13:25.948 [124/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:13:25.948 [125/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:13:25.948 [126/710] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:13:25.948 [127/710] Linking static target lib/librte_pci.a 00:13:26.206 [128/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:13:26.206 [129/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:13:26.206 [130/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:13:26.206 [131/710] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:13:26.206 [132/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:13:26.465 [133/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:13:26.465 [134/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:13:26.465 [135/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:13:26.465 [136/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:13:26.465 [137/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:13:26.465 [138/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:13:26.465 [139/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:13:26.465 [140/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:13:26.750 [141/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:13:26.750 [142/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:13:26.750 [143/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:13:27.010 [144/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:13:27.010 [145/710] Linking static target lib/librte_cmdline.a 00:13:27.010 [146/710] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:13:27.269 [147/710] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:13:27.269 [148/710] Linking static target lib/librte_metrics.a 00:13:27.269 [149/710] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:13:27.269 [150/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:13:27.531 [151/710] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:13:27.789 [152/710] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:13:27.789 [153/710] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:13:27.789 [154/710] Linking static target 
lib/librte_timer.a 00:13:28.047 [155/710] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:13:28.047 [156/710] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:13:28.612 [157/710] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:13:28.612 [158/710] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:13:28.612 [159/710] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:13:28.870 [160/710] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:13:29.436 [161/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:13:29.436 [162/710] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:13:29.437 [163/710] Linking static target lib/librte_bitratestats.a 00:13:29.437 [164/710] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:13:29.437 [165/710] Linking static target lib/librte_ethdev.a 00:13:29.437 [166/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:13:29.437 [167/710] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:13:29.695 [168/710] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:13:29.695 [169/710] Linking static target lib/librte_hash.a 00:13:29.695 [170/710] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:13:29.695 [171/710] Linking static target lib/librte_bbdev.a 00:13:29.953 [172/710] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:13:29.953 [173/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:13:29.953 [174/710] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:13:29.953 [175/710] Linking static target lib/acl/libavx2_tmp.a 00:13:29.953 [176/710] Linking target lib/librte_eal.so.24.0 00:13:30.211 [177/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:13:30.211 [178/710] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:13:30.211 [179/710] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:13:30.211 [180/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:13:30.211 [181/710] Linking target lib/librte_meter.so.24.0 00:13:30.211 [182/710] Linking target lib/librte_ring.so.24.0 00:13:30.211 [183/710] Linking target lib/librte_pci.so.24.0 00:13:30.469 [184/710] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:13:30.469 [185/710] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:13:30.469 [186/710] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:13:30.469 [187/710] Linking target lib/librte_timer.so.24.0 00:13:30.469 [188/710] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:13:30.469 [189/710] Linking target lib/librte_rcu.so.24.0 00:13:30.469 [190/710] Linking target lib/librte_mempool.so.24.0 00:13:30.469 [191/710] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:13:30.469 [192/710] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:13:30.469 [193/710] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:13:30.726 [194/710] Linking target lib/librte_mbuf.so.24.0 00:13:30.726 [195/710] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:13:30.726 [196/710] Linking static target lib/acl/libavx512_tmp.a 00:13:30.726 [197/710] Compiling C object 
lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:13:30.726 [198/710] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:13:30.726 [199/710] Linking target lib/librte_net.so.24.0 00:13:30.726 [200/710] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:13:30.726 [201/710] Linking static target lib/librte_acl.a 00:13:30.984 [202/710] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:13:30.984 [203/710] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:13:30.984 [204/710] Linking target lib/librte_cmdline.so.24.0 00:13:30.984 [205/710] Linking target lib/librte_hash.so.24.0 00:13:30.984 [206/710] Linking target lib/librte_bbdev.so.24.0 00:13:30.984 [207/710] Linking static target lib/librte_cfgfile.a 00:13:30.984 [208/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:13:31.242 [209/710] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:13:31.242 [210/710] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:13:31.242 [211/710] Linking target lib/librte_acl.so.24.0 00:13:31.242 [212/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:13:31.242 [213/710] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:13:31.242 [214/710] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols 00:13:31.500 [215/710] Linking target lib/librte_cfgfile.so.24.0 00:13:31.500 [216/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:13:31.500 [217/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:13:31.500 [218/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:13:31.759 [219/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:13:31.759 [220/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:13:31.759 [221/710] Linking static target lib/librte_bpf.a 00:13:32.019 [222/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:13:32.019 [223/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:13:32.019 [224/710] Linking static target lib/librte_compressdev.a 00:13:32.019 [225/710] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:13:32.276 [226/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:13:32.276 [227/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:13:32.276 [228/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:13:32.533 [229/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:13:32.533 [230/710] Linking static target lib/librte_distributor.a 00:13:32.533 [231/710] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:13:32.533 [232/710] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:13:32.533 [233/710] Linking target lib/librte_compressdev.so.24.0 00:13:32.792 [234/710] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:13:32.792 [235/710] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:13:32.792 [236/710] Linking static target lib/librte_dmadev.a 00:13:32.792 [237/710] Linking target lib/librte_distributor.so.24.0 00:13:32.792 [238/710] Compiling C object 
lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:13:33.050 [239/710] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:13:33.050 [240/710] Linking target lib/librte_dmadev.so.24.0 00:13:33.308 [241/710] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:13:33.308 [242/710] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:13:33.566 [243/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:13:33.824 [244/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:13:33.824 [245/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:13:33.824 [246/710] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:13:33.824 [247/710] Linking static target lib/librte_efd.a 00:13:34.081 [248/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:13:34.081 [249/710] Linking static target lib/librte_cryptodev.a 00:13:34.081 [250/710] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:13:34.337 [251/710] Linking target lib/librte_efd.so.24.0 00:13:34.337 [252/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:13:34.594 [253/710] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:13:34.594 [254/710] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:13:34.594 [255/710] Linking static target lib/librte_dispatcher.a 00:13:34.594 [256/710] Linking target lib/librte_ethdev.so.24.0 00:13:34.594 [257/710] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:13:34.594 [258/710] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:13:34.594 [259/710] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:13:34.852 [260/710] Linking static target lib/librte_gpudev.a 00:13:34.852 [261/710] Linking target lib/librte_metrics.so.24.0 00:13:34.852 [262/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:13:34.852 [263/710] Linking target lib/librte_bpf.so.24.0 00:13:34.852 [264/710] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols 00:13:34.852 [265/710] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols 00:13:34.852 [266/710] Linking target lib/librte_bitratestats.so.24.0 00:13:34.852 [267/710] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:13:35.111 [268/710] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:13:35.111 [269/710] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:13:35.111 [270/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:13:35.369 [271/710] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:13:35.369 [272/710] Linking target lib/librte_cryptodev.so.24.0 00:13:35.369 [273/710] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:13:35.673 [274/710] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:13:35.673 [275/710] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:13:35.673 [276/710] Linking target lib/librte_gpudev.so.24.0 00:13:35.673 [277/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:13:35.673 [278/710] Linking static target 
lib/librte_eventdev.a 00:13:35.673 [279/710] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:13:35.673 [280/710] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:13:35.932 [281/710] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:13:35.932 [282/710] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:13:35.932 [283/710] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:13:35.932 [284/710] Linking static target lib/librte_gro.a 00:13:35.932 [285/710] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:13:36.189 [286/710] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:13:36.189 [287/710] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:13:36.189 [288/710] Linking target lib/librte_gro.so.24.0 00:13:36.189 [289/710] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:13:36.189 [290/710] Linking static target lib/librte_gso.a 00:13:36.447 [291/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:13:36.447 [292/710] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:13:36.447 [293/710] Linking target lib/librte_gso.so.24.0 00:13:36.447 [294/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:13:36.704 [295/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:13:36.704 [296/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:13:36.704 [297/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:13:36.704 [298/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:13:36.704 [299/710] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:13:36.704 [300/710] Linking static target lib/librte_jobstats.a 00:13:36.704 [301/710] Linking static target lib/librte_ip_frag.a 00:13:36.962 [302/710] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:13:36.962 [303/710] Linking static target lib/librte_latencystats.a 00:13:36.962 [304/710] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:13:37.220 [305/710] Linking target lib/librte_ip_frag.so.24.0 00:13:37.220 [306/710] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:13:37.220 [307/710] Linking target lib/librte_jobstats.so.24.0 00:13:37.220 [308/710] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols 00:13:37.220 [309/710] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:13:37.220 [310/710] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:13:37.220 [311/710] Linking target lib/librte_latencystats.so.24.0 00:13:37.220 [312/710] Linking static target lib/member/libsketch_avx512_tmp.a 00:13:37.220 [313/710] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:13:37.479 [314/710] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:13:37.479 [315/710] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:13:37.479 [316/710] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:13:37.479 [317/710] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:13:38.043 [318/710] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:13:38.043 [319/710] Linking target 
lib/librte_eventdev.so.24.0 00:13:38.043 [320/710] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:13:38.043 [321/710] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:13:38.043 [322/710] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:13:38.043 [323/710] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols 00:13:38.301 [324/710] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:13:38.301 [325/710] Linking target lib/librte_dispatcher.so.24.0 00:13:38.301 [326/710] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:13:38.301 [327/710] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:13:38.301 [328/710] Linking static target lib/librte_lpm.a 00:13:38.301 [329/710] Linking static target lib/librte_pcapng.a 00:13:38.301 [330/710] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:13:38.301 [331/710] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:13:38.559 [332/710] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:13:38.559 [333/710] Linking target lib/librte_pcapng.so.24.0 00:13:38.559 [334/710] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:13:38.559 [335/710] Linking target lib/librte_lpm.so.24.0 00:13:38.816 [336/710] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols 00:13:38.816 [337/710] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:13:38.816 [338/710] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols 00:13:38.816 [339/710] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:13:39.074 [340/710] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:13:39.074 [341/710] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:13:39.074 [342/710] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:13:39.074 [343/710] Linking static target lib/librte_power.a 00:13:39.074 [344/710] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:13:39.074 [345/710] Linking static target lib/librte_member.a 00:13:39.074 [346/710] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:13:39.074 [347/710] Linking static target lib/librte_regexdev.a 00:13:39.074 [348/710] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:13:39.074 [349/710] Linking static target lib/librte_rawdev.a 00:13:39.332 [350/710] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:13:39.332 [351/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:13:39.332 [352/710] Linking target lib/librte_member.so.24.0 00:13:39.332 [353/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:13:39.590 [354/710] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:13:39.590 [355/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:13:39.590 [356/710] Linking static target lib/librte_mldev.a 00:13:39.590 [357/710] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:13:39.590 [358/710] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:13:39.590 [359/710] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:13:39.590 [360/710] Linking target lib/librte_power.so.24.0 
00:13:39.590 [361/710] Linking target lib/librte_rawdev.so.24.0 00:13:39.848 [362/710] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:13:39.848 [363/710] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:13:39.848 [364/710] Linking target lib/librte_regexdev.so.24.0 00:13:40.106 [365/710] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:13:40.107 [366/710] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:13:40.107 [367/710] Linking static target lib/librte_rib.a 00:13:40.107 [368/710] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:13:40.107 [369/710] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:13:40.365 [370/710] Linking static target lib/librte_reorder.a 00:13:40.365 [371/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:13:40.365 [372/710] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:13:40.365 [373/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:13:40.365 [374/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:13:40.365 [375/710] Linking static target lib/librte_stack.a 00:13:40.622 [376/710] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:13:40.622 [377/710] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:13:40.622 [378/710] Linking static target lib/librte_security.a 00:13:40.622 [379/710] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:13:40.622 [380/710] Linking target lib/librte_reorder.so.24.0 00:13:40.622 [381/710] Linking target lib/librte_rib.so.24.0 00:13:40.622 [382/710] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:13:40.622 [383/710] Linking target lib/librte_stack.so.24.0 00:13:40.622 [384/710] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols 00:13:40.880 [385/710] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols 00:13:40.880 [386/710] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:13:40.880 [387/710] Linking target lib/librte_mldev.so.24.0 00:13:40.880 [388/710] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:13:41.139 [389/710] Linking target lib/librte_security.so.24.0 00:13:41.139 [390/710] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:13:41.139 [391/710] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:13:41.139 [392/710] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols 00:13:41.397 [393/710] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:13:41.397 [394/710] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:13:41.397 [395/710] Linking static target lib/librte_sched.a 00:13:41.656 [396/710] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:13:41.656 [397/710] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:13:41.913 [398/710] Linking target lib/librte_sched.so.24.0 00:13:41.914 [399/710] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:13:41.914 [400/710] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols 00:13:41.914 [401/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:13:42.170 [402/710] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:13:42.170 [403/710] Compiling C object 
lib/librte_ipsec.a.p/ipsec_ses.c.o 00:13:42.427 [404/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:13:42.684 [405/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:13:42.684 [406/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:13:42.684 [407/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:13:42.942 [408/710] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:13:42.942 [409/710] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:13:42.942 [410/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:13:43.199 [411/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:13:43.200 [412/710] Linking static target lib/librte_ipsec.a 00:13:43.200 [413/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:13:43.458 [414/710] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:13:43.458 [415/710] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:13:43.458 [416/710] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:13:43.458 [417/710] Linking target lib/librte_ipsec.so.24.0 00:13:43.458 [418/710] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:13:43.458 [419/710] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:13:43.458 [420/710] Linking static target lib/fib/libtrie_avx512_tmp.a 00:13:43.458 [421/710] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:13:43.715 [422/710] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:13:43.715 [423/710] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols 00:13:44.647 [424/710] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:13:44.647 [425/710] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:13:44.647 [426/710] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:13:44.647 [427/710] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:13:44.647 [428/710] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:13:44.647 [429/710] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:13:44.647 [430/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:13:44.647 [431/710] Linking static target lib/librte_fib.a 00:13:44.647 [432/710] Linking static target lib/librte_pdcp.a 00:13:44.905 [433/710] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:13:44.905 [434/710] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:13:44.905 [435/710] Linking target lib/librte_pdcp.so.24.0 00:13:44.905 [436/710] Linking target lib/librte_fib.so.24.0 00:13:45.163 [437/710] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:13:45.422 [438/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:13:45.680 [439/710] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:13:45.680 [440/710] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:13:45.680 [441/710] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:13:45.680 [442/710] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:13:45.938 [443/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:13:45.938 [444/710] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:13:46.197 [445/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:13:46.197 
[446/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:13:46.197 [447/710] Linking static target lib/librte_port.a 00:13:46.455 [448/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:13:46.455 [449/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:13:46.455 [450/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:13:46.455 [451/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:13:46.713 [452/710] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:13:46.713 [453/710] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:13:46.713 [454/710] Linking target lib/librte_port.so.24.0 00:13:46.713 [455/710] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:13:46.713 [456/710] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:13:46.971 [457/710] Linking static target lib/librte_pdump.a 00:13:46.971 [458/710] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols 00:13:46.971 [459/710] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:13:47.229 [460/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:13:47.229 [461/710] Linking target lib/librte_pdump.so.24.0 00:13:47.229 [462/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:13:47.487 [463/710] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:13:47.744 [464/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:13:47.745 [465/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:13:47.745 [466/710] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:13:47.745 [467/710] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:13:47.745 [468/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:13:48.003 [469/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:13:48.262 [470/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:13:48.262 [471/710] Linking static target lib/librte_table.a 00:13:48.262 [472/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:13:48.262 [473/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:13:48.827 [474/710] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:13:48.827 [475/710] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:13:48.827 [476/710] Linking target lib/librte_table.so.24.0 00:13:48.827 [477/710] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:13:49.084 [478/710] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols 00:13:49.084 [479/710] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:13:49.398 [480/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:13:49.398 [481/710] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:13:49.661 [482/710] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:13:49.661 [483/710] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:13:49.661 [484/710] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:13:49.920 [485/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:13:49.920 [486/710] Compiling C object 
lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:13:50.178 [487/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:13:50.437 [488/710] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:13:50.437 [489/710] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:13:50.437 [490/710] Linking static target lib/librte_graph.a 00:13:50.437 [491/710] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:13:50.695 [492/710] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:13:50.695 [493/710] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:13:50.953 [494/710] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:13:51.211 [495/710] Linking target lib/librte_graph.so.24.0 00:13:51.211 [496/710] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:13:51.211 [497/710] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:13:51.211 [498/710] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols 00:13:51.211 [499/710] Compiling C object lib/librte_node.a.p/node_null.c.o 00:13:51.471 [500/710] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:13:51.729 [501/710] Compiling C object lib/librte_node.a.p/node_log.c.o 00:13:51.729 [502/710] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:13:51.729 [503/710] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:13:51.988 [504/710] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:13:51.988 [505/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:13:51.988 [506/710] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:13:52.245 [507/710] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:13:52.245 [508/710] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:13:52.502 [509/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:13:52.502 [510/710] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:13:52.758 [511/710] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:13:52.758 [512/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:13:52.758 [513/710] Linking static target lib/librte_node.a 00:13:52.758 [514/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:13:52.759 [515/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:13:53.016 [516/710] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:13:53.016 [517/710] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:13:53.016 [518/710] Linking static target drivers/libtmp_rte_bus_vdev.a 00:13:53.016 [519/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:13:53.016 [520/710] Linking static target drivers/libtmp_rte_bus_pci.a 00:13:53.016 [521/710] Linking target lib/librte_node.so.24.0 00:13:53.274 [522/710] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:13:53.274 [523/710] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:13:53.274 [524/710] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:13:53.274 [525/710] Linking static target drivers/librte_bus_vdev.a 00:13:53.274 [526/710] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:13:53.274 [527/710] Compiling C object 
drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:13:53.274 [528/710] Linking static target drivers/librte_bus_pci.a 00:13:53.531 [529/710] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:13:53.531 [530/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:13:53.531 [531/710] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:13:53.819 [532/710] Linking target drivers/librte_bus_vdev.so.24.0 00:13:53.819 [533/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:13:53.819 [534/710] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols 00:13:53.819 [535/710] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:13:53.819 [536/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:13:53.819 [537/710] Linking target drivers/librte_bus_pci.so.24.0 00:13:53.819 [538/710] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:13:53.819 [539/710] Linking static target drivers/libtmp_rte_mempool_ring.a 00:13:54.076 [540/710] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols 00:13:54.076 [541/710] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:13:54.076 [542/710] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:13:54.076 [543/710] Linking static target drivers/librte_mempool_ring.a 00:13:54.076 [544/710] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:13:54.076 [545/710] Linking target drivers/librte_mempool_ring.so.24.0 00:13:54.333 [546/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:13:54.605 [547/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:13:54.881 [548/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:13:55.139 [549/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:13:55.139 [550/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:13:55.139 [551/710] Linking static target drivers/net/i40e/base/libi40e_base.a 00:13:56.074 [552/710] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:13:56.074 [553/710] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:13:56.074 [554/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:13:56.074 [555/710] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:13:56.074 [556/710] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:13:56.074 [557/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:13:56.641 [558/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:13:56.641 [559/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:13:56.899 [560/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:13:56.899 [561/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:13:56.899 [562/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:13:57.465 [563/710] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:13:57.465 [564/710] 
Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:13:57.724 [565/710] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:13:57.724 [566/710] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:13:58.289 [567/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:13:58.289 [568/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:13:58.289 [569/710] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:13:58.290 [570/710] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:13:58.290 [571/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:13:58.290 [572/710] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:13:58.290 [573/710] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:13:58.855 [574/710] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:13:58.855 [575/710] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:13:58.855 [576/710] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:13:58.855 [577/710] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:13:58.855 [578/710] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:13:58.855 [579/710] Linking static target lib/librte_vhost.a 00:13:59.114 [580/710] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:13:59.372 [581/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:13:59.372 [582/710] Linking static target drivers/libtmp_rte_net_i40e.a 00:13:59.630 [583/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:13:59.630 [584/710] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:13:59.631 [585/710] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:13:59.631 [586/710] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:13:59.631 [587/710] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:13:59.631 [588/710] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:13:59.631 [589/710] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:13:59.631 [590/710] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:13:59.631 [591/710] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:13:59.888 [592/710] Linking static target drivers/librte_net_i40e.a 00:14:00.146 [593/710] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:14:00.146 [594/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:14:00.146 [595/710] Linking target lib/librte_vhost.so.24.0 00:14:00.404 [596/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:14:00.404 [597/710] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:14:00.404 [598/710] Linking target drivers/librte_net_i40e.so.24.0 00:14:00.404 [599/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:14:00.662 [600/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:14:00.920 [601/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:14:00.920 [602/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:14:00.920 [603/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:14:01.178 
[604/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:14:01.178 [605/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:14:01.438 [606/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:14:01.438 [607/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:14:01.696 [608/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:14:01.696 [609/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:14:01.954 [610/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:14:01.954 [611/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:14:01.954 [612/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:14:01.954 [613/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:14:02.211 [614/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:14:02.211 [615/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:14:02.211 [616/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:14:02.211 [617/710] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:14:02.469 [618/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:14:02.728 [619/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:14:02.987 [620/710] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:14:02.987 [621/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:14:02.987 [622/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:14:03.246 [623/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:14:04.182 [624/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:14:04.182 [625/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:14:04.182 [626/710] Linking static target lib/librte_pipeline.a 00:14:04.182 [627/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:14:04.182 [628/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:14:04.182 [629/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:14:04.182 [630/710] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:14:04.182 [631/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:14:04.447 [632/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:14:04.706 [633/710] Linking target app/dpdk-graph 00:14:04.707 [634/710] Linking target app/dpdk-dumpcap 00:14:04.707 [635/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:14:04.707 [636/710] Linking target app/dpdk-pdump 00:14:04.707 [637/710] Linking target app/dpdk-proc-info 00:14:04.707 [638/710] Linking target app/dpdk-test-acl 00:14:04.707 [639/710] Linking target app/dpdk-test-cmdline 00:14:05.273 [640/710] Linking target app/dpdk-test-dma-perf 00:14:05.273 [641/710] Linking target app/dpdk-test-compress-perf 00:14:05.273 [642/710] Linking target app/dpdk-test-crypto-perf 
00:14:05.273 [643/710] Linking target app/dpdk-test-fib 00:14:05.273 [644/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:14:05.532 [645/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:14:05.532 [646/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:14:05.532 [647/710] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:14:05.790 [648/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:14:05.790 [649/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:14:05.790 [650/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:14:06.048 [651/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:14:06.048 [652/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:14:06.048 [653/710] Linking target app/dpdk-test-gpudev 00:14:06.048 [654/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:14:06.306 [655/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:14:06.306 [656/710] Linking target app/dpdk-test-eventdev 00:14:06.306 [657/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:14:06.572 [658/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:14:06.572 [659/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:14:06.572 [660/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:14:06.572 [661/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:14:06.844 [662/710] Linking target app/dpdk-test-flow-perf 00:14:06.844 [663/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:14:06.844 [664/710] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:14:06.844 [665/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:14:06.844 [666/710] Linking target lib/librte_pipeline.so.24.0 00:14:06.844 [667/710] Linking target app/dpdk-test-bbdev 00:14:07.102 [668/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:14:07.359 [669/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:14:07.359 [670/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:14:07.359 [671/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:14:07.359 [672/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:14:07.359 [673/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:14:07.615 [674/710] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:14:07.874 [675/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:14:07.874 [676/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:14:08.131 [677/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:14:08.131 [678/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:14:08.389 [679/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:14:08.389 [680/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:14:08.389 [681/710] Linking target app/dpdk-test-pipeline 00:14:08.646 [682/710] Linking target app/dpdk-test-mldev 00:14:08.646 [683/710] Compiling C object 
app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:14:09.210 [684/710] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:14:09.210 [685/710] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:14:09.210 [686/710] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:14:09.210 [687/710] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:14:09.468 [688/710] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:14:09.468 [689/710] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:14:09.468 [690/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:14:09.726 [691/710] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:14:09.726 [692/710] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:14:09.983 [693/710] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:14:10.241 [694/710] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:14:10.498 [695/710] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:14:10.498 [696/710] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:14:10.756 [697/710] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:14:11.014 [698/710] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:14:11.014 [699/710] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:14:11.014 [700/710] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:14:11.272 [701/710] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:14:11.272 [702/710] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:14:11.272 [703/710] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:14:11.529 [704/710] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:14:11.529 [705/710] Linking target app/dpdk-test-regex 00:14:11.529 [706/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:14:11.787 [707/710] Linking target app/dpdk-test-sad 00:14:11.787 [708/710] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:14:12.379 [709/710] Linking target app/dpdk-testpmd 00:14:12.379 [710/710] Linking target app/dpdk-test-security-perf 00:14:12.379 21:27:33 -- common/autobuild_common.sh@187 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:14:12.379 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:14:12.379 [0/1] Installing files. 
00:14:12.681 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:14:12.681 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:14:12.681 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:14:12.681 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:14:12.681 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:14:12.681 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:14:12.681 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:14:12.681 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:14:12.681 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:14:12.681 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:14:12.681 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:14:12.681 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:14:12.681 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:14:12.681 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:14:12.681 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:14:12.681 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:14:12.681 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:14:12.681 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:14:12.681 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:14:12.681 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:14:12.681 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:14:12.681 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:14:12.681 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:14:12.681 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:14:12.681 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:14:12.681 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:14:12.681 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:14:12.681 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:14:12.681 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:14:12.681 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:14:12.681 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:14:12.681 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:14:12.681 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:14:12.681 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:14:12.681 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:14:12.681 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:14:12.681 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:14:12.681 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:14:12.681 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:14:12.681 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:14:12.681 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:14:12.681 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:14:12.681 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:14:12.681 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:14:12.681 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:14:12.682 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:14:12.682 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:14:12.682 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:14:12.682 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:14:12.682 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:14:12.682 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:14:12.682 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:14:12.682 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:14:12.682 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:14:12.682 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:14:12.682 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:14:12.944 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:14:12.944 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:14:12.944 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:14:12.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:14:12.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:14:12.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:14:12.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:14:12.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:14:12.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:14:12.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:14:12.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:14:12.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:14:12.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:14:12.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:14:12.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:14:12.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:14:12.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:14:12.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:14:12.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:14:12.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:14:12.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:14:12.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:14:12.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:14:12.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:14:12.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:14:12.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:14:12.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:14:12.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:14:12.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:14:12.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:14:12.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:14:12.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:14:12.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:14:12.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:14:12.945 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:14:12.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:14:12.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:14:12.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:14:12.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:14:12.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:14:12.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:14:12.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:14:12.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:14:12.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:14:12.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:14:12.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:14:12.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:14:12.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:14:12.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:14:12.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:14:12.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:14:12.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:14:12.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:14:12.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:14:12.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:14:12.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:14:12.945 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:14:12.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:14:12.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:14:12.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:14:12.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:14:12.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:14:12.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:14:12.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:14:12.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:14:12.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:14:12.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:14:12.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:14:12.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:14:12.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:14:12.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:14:12.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:14:12.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:14:12.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:14:12.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:14:12.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:14:12.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:14:12.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:14:12.945 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:14:12.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:14:12.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:14:12.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:14:12.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:14:12.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:14:12.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:14:12.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:14:12.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:14:12.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:14:12.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:14:12.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:14:12.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:14:12.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:14:12.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:14:12.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:14:12.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:14:12.946 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:14:12.946 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:14:12.946 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:14:12.946 
Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:14:12.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:14:12.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:14:12.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:14:12.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:14:12.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:14:12.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:14:12.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:14:12.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:14:12.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:14:12.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:14:12.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:14:12.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:14:12.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:14:12.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:14:12.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:14:12.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:14:12.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:14:12.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:14:12.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:14:12.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:14:12.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:14:12.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:14:12.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:14:12.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:14:12.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:14:12.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:14:12.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:14:12.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:14:12.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:14:12.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:14:12.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:14:12.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:14:12.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:14:12.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:14:12.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:14:12.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:14:12.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:14:12.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:14:12.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:14:12.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:14:12.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:14:12.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:14:12.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:14:12.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:14:12.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:14:12.946 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:14:12.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:14:12.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:14:12.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:14:12.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:14:12.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:14:12.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:14:12.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:14:12.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:14:12.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:14:12.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:14:12.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:14:12.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:14:12.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:14:12.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:14:12.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:14:12.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:14:12.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:14:12.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:14:12.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:14:12.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:14:12.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:14:12.946 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:14:12.946 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:14:12.946 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:14:12.946 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:14:12.946 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:14:12.946 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:14:12.946 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:14:12.946 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:14:12.946 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:14:12.946 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:14:12.946 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:14:12.946 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:14:12.946 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:14:12.946 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:14:12.946 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:14:12.946 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:14:12.946 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:14:12.946 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:14:12.946 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:14:12.947 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:14:12.947 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:14:12.947 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:14:12.947 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:14:12.947 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:14:12.947 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:14:12.947 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:14:12.947 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:14:12.947 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:14:12.947 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:14:12.947 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:14:12.947 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:14:12.947 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:14:12.947 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:14:12.947 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:14:12.947 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:14:12.947 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:14:12.947 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:14:12.947 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:14:12.947 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:14:12.947 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:14:12.947 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:14:12.947 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:14:12.947 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:14:12.947 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:14:12.947 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:14:12.947 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:14:12.947 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:14:12.947 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:14:12.947 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:14:12.947 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:14:12.947 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec_sa.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:14:12.947 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:14:12.947 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:14:12.947 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:14:12.947 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:14:12.947 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:14:12.947 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:14:12.947 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:14:12.947 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:14:12.947 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:14:12.947 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:14:12.947 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:14:12.947 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:14:12.947 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:14:12.947 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:14:12.947 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:14:12.947 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:14:12.947 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:14:12.947 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:14:12.947 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:14:12.947 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:14:12.947 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:14:12.947 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:14:12.947 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:14:12.947 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:14:12.947 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:14:12.947 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:14:12.947 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:14:12.947 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:14:12.947 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:14:12.947 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:14:12.947 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:14:12.947 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:14:12.947 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:14:12.947 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:14:12.947 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:14:12.947 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:14:12.947 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:14:12.947 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:14:12.947 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:14:12.947 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:14:12.947 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:14:12.947 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:14:12.947 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:14:12.947 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:14:12.947 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:14:12.947 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:14:12.947 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:14:12.947 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:14:12.947 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:14:12.947 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:14:12.947 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:14:12.947 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:14:12.947 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:14:12.947 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:14:12.947 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:14:12.947 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/node.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:14:12.947 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:14:12.948 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:14:12.948 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:14:12.948 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:14:12.948 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:14:12.948 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:14:12.948 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:14:12.948 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:14:12.948 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:14:12.948 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:14:12.948 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:14:12.948 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:14:12.948 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:14:12.948 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:14:12.948 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:14:12.948 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:14:12.948 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:14:12.948 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:14:12.948 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:14:12.948 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:14:12.948 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:14:12.948 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:14:12.948 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:14:12.948 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:14:12.948 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:14:12.948 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:14:12.948 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:14:12.948 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:14:12.948 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:14:12.948 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:14:12.948 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:14:12.948 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:14:12.948 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:14:12.948 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:14:12.948 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:14:12.948 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:14:12.948 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:14:12.948 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:14:12.948 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:14:12.948 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:14:12.948 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:14:12.948 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:14:12.948 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:14:12.948 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:14:12.948 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:14:12.948 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:14:12.948 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:14:12.948 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:14:12.948 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:14:12.948 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:14:12.948 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:14:12.948 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:14:12.948 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:14:12.948 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:14:12.948 Installing lib/librte_log.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:14:12.948 Installing lib/librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:14:12.948 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:14:12.948 Installing lib/librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:14:12.948 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:14:12.948 Installing lib/librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:14:12.948 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:14:12.948 Installing lib/librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:14:12.948 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:14:12.948 Installing lib/librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:14:12.948 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:14:12.948 Installing lib/librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:14:12.948 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:14:12.948 Installing lib/librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:14:12.948 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:14:12.948 Installing lib/librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:14:12.948 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:14:12.948 Installing lib/librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:14:12.948 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:14:12.948 Installing lib/librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:14:12.948 Installing 
lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:14:12.948 Installing lib/librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:14:12.948 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:14:12.948 Installing lib/librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:14:12.948 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:14:12.948 Installing lib/librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:14:12.948 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:14:12.948 Installing lib/librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:14:12.948 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:14:12.948 Installing lib/librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:14:12.948 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:14:12.948 Installing lib/librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:14:12.948 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:14:12.948 Installing lib/librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:14:12.948 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:14:12.948 Installing lib/librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:14:12.948 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:14:12.948 Installing lib/librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:14:12.948 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:14:12.948 Installing lib/librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:14:12.948 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:14:12.948 Installing lib/librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:14:12.948 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:14:12.948 Installing lib/librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:14:12.948 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:14:12.948 Installing lib/librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:14:12.948 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:14:12.948 Installing lib/librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:14:12.948 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:14:12.948 Installing lib/librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:14:12.948 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:14:12.948 Installing lib/librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:14:12.949 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:14:12.949 Installing lib/librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:14:12.949 Installing lib/librte_dispatcher.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:14:12.949 Installing lib/librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:14:12.949 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:14:12.949 Installing lib/librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:14:12.949 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:14:12.949 Installing lib/librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 
00:14:12.949 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:14:12.949 Installing lib/librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:14:12.949 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:14:12.949 Installing lib/librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:14:12.949 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:14:12.949 Installing lib/librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:14:12.949 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:14:12.949 Installing lib/librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:14:12.949 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:14:12.949 Installing lib/librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:14:12.949 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:14:12.949 Installing lib/librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:14:12.949 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:14:12.949 Installing lib/librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:14:12.949 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:14:12.949 Installing lib/librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:14:12.949 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:14:12.949 Installing lib/librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:14:12.949 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:14:12.949 Installing lib/librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:14:12.949 Installing lib/librte_mldev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:14:12.949 Installing lib/librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:14:12.949 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:14:12.949 Installing lib/librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:14:12.949 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:14:12.949 Installing lib/librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:14:12.949 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:14:12.949 Installing lib/librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:14:12.949 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:14:12.949 Installing lib/librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:14:12.949 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:14:12.949 Installing lib/librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:14:12.949 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:14:12.949 Installing lib/librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:14:12.949 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:14:12.949 Installing lib/librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:14:12.949 Installing lib/librte_pdcp.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:14:12.949 Installing lib/librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:14:12.949 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:14:12.949 Installing lib/librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 
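[Editor's note: the entries above and below show the DPDK libraries (librte_eal, librte_mbuf, librte_ethdev, ...), headers, and the libdpdk.pc pkg-config file being installed under /home/vagrant/spdk_repo/dpdk/build. As a minimal illustrative sketch only (not part of this build log), a consumer of that install tree might look like the following C program; the file name, compile command, and use of pkg-config against the just-installed libdpdk.pc are assumptions for illustration, while rte_eal_init()/rte_eal_cleanup()/rte_version() are standard DPDK EAL APIs.]

    /* hello_dpdk.c - hypothetical example, not produced by this build */
    #include <stdio.h>
    #include <rte_eal.h>      /* installed to build/include, per the log below */
    #include <rte_version.h>

    int main(int argc, char **argv)
    {
        /* Initialize the DPDK Environment Abstraction Layer (EAL). */
        int ret = rte_eal_init(argc, argv);
        if (ret < 0) {
            fprintf(stderr, "rte_eal_init failed\n");
            return 1;
        }

        printf("DPDK initialized: %s\n", rte_version());

        /* Release EAL resources before exiting. */
        rte_eal_cleanup();
        return 0;
    }

[One plausible way to build it, assuming the pkg-config file installed later in this log (build/lib/pkgconfig/libdpdk.pc) is on PKG_CONFIG_PATH: gcc hello_dpdk.c $(pkg-config --cflags --libs libdpdk) -o hello_dpdk. End of editor's note; install log continues.]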
00:14:12.949 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:14:12.949 Installing lib/librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:14:12.949 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:14:12.949 Installing lib/librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:14:12.949 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:14:12.949 Installing lib/librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:14:12.949 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:14:12.949 Installing lib/librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:14:12.949 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:14:12.949 Installing lib/librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:14:13.208 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:14:13.208 Installing lib/librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:14:13.208 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:14:13.208 Installing drivers/librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:14:13.208 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:14:13.208 Installing drivers/librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:14:13.208 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:14:13.208 Installing drivers/librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:14:13.208 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:14:13.208 Installing drivers/librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:14:13.208 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:14:13.208 Installing app/dpdk-graph to /home/vagrant/spdk_repo/dpdk/build/bin 00:14:13.208 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:14:13.208 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:14:13.208 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:14:13.208 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:14:13.208 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:14:13.208 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:14:13.208 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:14:13.208 Installing app/dpdk-test-dma-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:14:13.208 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:14:13.208 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:14:13.208 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:14:13.208 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:14:13.208 Installing app/dpdk-test-mldev to /home/vagrant/spdk_repo/dpdk/build/bin 00:14:13.208 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:14:13.208 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:14:13.208 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:14:13.208 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:14:13.208 Installing 
app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:14:13.208 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.208 Installing /home/vagrant/spdk_repo/dpdk/lib/log/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.208 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.208 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.208 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:14:13.208 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:14:13.208 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:14:13.208 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:14:13.208 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:14:13.208 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:14:13.208 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:14:13.208 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:14:13.208 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:14:13.208 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:14:13.208 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:14:13.208 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:14:13.208 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.208 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.208 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.208 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.208 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.208 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.208 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.208 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.208 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.209 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.209 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.209 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.209 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.209 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.209 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.209 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.209 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.209 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.209 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.209 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.209 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.209 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.209 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.209 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.209 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.209 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.209 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.209 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.209 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.209 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.209 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.209 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.209 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.209 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.209 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.209 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.209 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.209 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.209 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.209 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.209 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lock_annotations.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.209 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.209 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.209 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.209 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.209 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.209 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.209 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.209 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.209 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.209 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.209 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.209 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.209 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.209 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.209 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_stdatomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.209 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.209 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.209 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.209 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.209 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.209 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.209 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.209 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.209 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.209 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.209 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.209 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.209 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.209 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.209 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.209 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.209 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.209 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.209 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.209 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.209 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.209 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.209 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.209 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.209 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.209 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.209 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.209 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.209 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.209 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.209 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.209 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.209 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.209 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.209 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.209 Installing 
/home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.209 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.209 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_dtls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.209 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.209 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.209 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.209 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.209 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.209 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.209 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.209 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.209 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.209 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.470 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.470 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.470 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.470 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.470 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_pdcp_hdr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.470 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.470 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.470 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.470 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.470 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.470 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.470 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.470 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.470 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.470 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.470 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.470 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:14:13.470 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.470 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.470 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.470 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.470 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.470 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.470 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.470 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.470 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.470 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.470 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.470 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.470 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.470 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.470 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.470 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.470 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.470 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.470 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.470 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.470 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.470 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.470 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.470 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.470 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.470 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.470 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.470 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:14:13.470 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.470 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.470 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.471 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.471 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.471 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.471 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.471 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.471 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.471 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.471 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.471 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.471 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.471 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.471 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.471 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.471 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.471 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.471 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.471 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.471 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.471 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.471 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.471 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.471 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.471 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.471 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_dma_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.471 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.471 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.471 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.471 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.471 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.471 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.471 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.471 Installing /home/vagrant/spdk_repo/dpdk/lib/dispatcher/rte_dispatcher.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.471 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.471 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.471 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.471 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.471 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.471 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.471 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.471 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.471 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.471 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.471 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.471 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.471 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.471 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.471 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.471 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.471 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.471 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.471 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.471 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.471 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:14:13.471 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.471 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.471 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.471 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.471 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.471 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.471 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.471 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.471 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.471 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.471 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.471 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.471 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.471 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.471 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.471 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.471 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.471 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.471 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.471 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.471 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.471 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.471 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.471 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.471 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.471 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.471 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.471 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.471 
Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.471 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.471 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.471 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.471 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.471 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.471 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.471 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.471 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.471 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.471 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.471 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.471 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.471 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.471 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.471 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.471 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.471 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.471 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.471 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.471 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.471 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.471 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.471 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.471 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.471 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.471 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.471 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.472 Installing 
/home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.472 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.472 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.472 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.472 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.472 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.472 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.472 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.472 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.472 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.472 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.472 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.472 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.472 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.472 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.472 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.472 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.472 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.472 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.472 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.472 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.472 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.472 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_rtc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.472 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.472 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.472 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.472 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip6_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.472 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_udp4_input_api.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:14:13.472 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.472 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.472 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.472 Installing /home/vagrant/spdk_repo/dpdk/buildtools/dpdk-cmdline-gen.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:14:13.472 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:14:13.472 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:14:13.472 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:14:13.472 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:14:13.472 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-rss-flows.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:14:13.472 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:14:13.472 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:14:13.472 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:14:13.472 Installing symlink pointing to librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so.24 00:14:13.472 Installing symlink pointing to librte_log.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so 00:14:13.472 Installing symlink pointing to librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.24 00:14:13.472 Installing symlink pointing to librte_kvargs.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:14:13.472 Installing symlink pointing to librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.24 00:14:13.472 Installing symlink pointing to librte_telemetry.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:14:13.472 Installing symlink pointing to librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.24 00:14:13.472 Installing symlink pointing to librte_eal.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:14:13.472 Installing symlink pointing to librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.24 00:14:13.472 Installing symlink pointing to librte_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:14:13.472 Installing symlink pointing to librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.24 00:14:13.472 Installing symlink pointing to librte_rcu.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:14:13.472 Installing symlink pointing to librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.24 00:14:13.472 Installing symlink pointing to librte_mempool.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:14:13.472 Installing symlink pointing to librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.24 00:14:13.472 Installing symlink pointing to librte_mbuf.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:14:13.472 Installing symlink pointing to librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.24 00:14:13.472 Installing symlink pointing to librte_net.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:14:13.472 Installing symlink pointing to librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.24 00:14:13.472 Installing symlink pointing to librte_meter.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:14:13.472 Installing symlink pointing to librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.24 00:14:13.472 Installing symlink pointing to librte_ethdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:14:13.472 Installing symlink pointing to librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.24 00:14:13.472 Installing symlink pointing to librte_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:14:13.472 Installing symlink pointing to librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.24 00:14:13.472 Installing symlink pointing to librte_cmdline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:14:13.472 Installing symlink pointing to librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.24 00:14:13.472 Installing symlink pointing to librte_metrics.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:14:13.472 Installing symlink pointing to librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.24 00:14:13.472 Installing symlink pointing to librte_hash.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:14:13.472 Installing symlink pointing to librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.24 00:14:13.472 Installing symlink pointing to librte_timer.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:14:13.472 Installing symlink pointing to librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.24 00:14:13.472 Installing symlink pointing to librte_acl.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:14:13.472 Installing symlink pointing to librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.24 00:14:13.472 Installing symlink pointing to librte_bbdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:14:13.472 Installing symlink pointing to librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.24 00:14:13.472 Installing symlink pointing to librte_bitratestats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:14:13.472 Installing symlink pointing to librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.24 00:14:13.472 Installing symlink pointing to librte_bpf.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:14:13.472 Installing symlink pointing to librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.24 00:14:13.472 Installing symlink pointing to librte_cfgfile.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:14:13.472 Installing symlink pointing to librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.24 00:14:13.472 Installing symlink pointing to librte_compressdev.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:14:13.472 Installing symlink pointing to librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.24 00:14:13.472 Installing symlink pointing to librte_cryptodev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:14:13.472 Installing symlink pointing to librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.24 00:14:13.472 Installing symlink pointing to librte_distributor.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:14:13.472 Installing symlink pointing to librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.24 00:14:13.472 Installing symlink pointing to librte_dmadev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:14:13.472 Installing symlink pointing to librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.24 00:14:13.472 Installing symlink pointing to librte_efd.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:14:13.472 Installing symlink pointing to librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.24 00:14:13.472 Installing symlink pointing to librte_eventdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:14:13.472 Installing symlink pointing to librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so.24 00:14:13.472 Installing symlink pointing to librte_dispatcher.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so 00:14:13.472 Installing symlink pointing to librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.24 00:14:13.472 Installing symlink pointing to librte_gpudev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:14:13.472 Installing symlink pointing to librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.24 00:14:13.472 Installing symlink pointing to librte_gro.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:14:13.472 Installing symlink pointing to librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.24 00:14:13.472 Installing symlink pointing to librte_gso.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:14:13.472 Installing symlink pointing to librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.24 00:14:13.472 Installing symlink pointing to librte_ip_frag.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:14:13.472 Installing symlink pointing to librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.24 00:14:13.472 Installing symlink pointing to librte_jobstats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:14:13.472 Installing symlink pointing to librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.24 00:14:13.472 Installing symlink pointing to librte_latencystats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:14:13.472 Installing symlink pointing to librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.24 00:14:13.472 Installing symlink pointing to librte_lpm.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:14:13.472 Installing symlink pointing to librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.24 00:14:13.473 Installing symlink pointing to 
librte_member.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:14:13.473 Installing symlink pointing to librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.24 00:14:13.473 Installing symlink pointing to librte_pcapng.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:14:13.473 Installing symlink pointing to librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.24 00:14:13.473 Installing symlink pointing to librte_power.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:14:13.473 Installing symlink pointing to librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.24 00:14:13.473 Installing symlink pointing to librte_rawdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:14:13.473 Installing symlink pointing to librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.24 00:14:13.473 Installing symlink pointing to librte_regexdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:14:13.473 Installing symlink pointing to librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so.24 00:14:13.473 Installing symlink pointing to librte_mldev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so 00:14:13.473 Installing symlink pointing to librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.24 00:14:13.473 Installing symlink pointing to librte_rib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:14:13.473 Installing symlink pointing to librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.24 00:14:13.473 Installing symlink pointing to librte_reorder.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:14:13.473 Installing symlink pointing to librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.24 00:14:13.473 Installing symlink pointing to librte_sched.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:14:13.473 Installing symlink pointing to librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.24 00:14:13.473 Installing symlink pointing to librte_security.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:14:13.473 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:14:13.473 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:14:13.473 './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:14:13.473 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 00:14:13.473 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:14:13.473 './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:14:13.473 './librte_mempool_ring.so' -> 'dpdk/pmds-24.0/librte_mempool_ring.so' 00:14:13.473 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:14:13.473 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:14:13.473 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:14:13.473 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:14:13.473 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:14:13.473 Installing symlink pointing to librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.24 00:14:13.473 Installing symlink pointing to librte_stack.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:14:13.473 Installing symlink pointing to librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.24 00:14:13.473 Installing symlink pointing to librte_vhost.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:14:13.473 Installing symlink pointing to librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.24 00:14:13.473 Installing symlink pointing to librte_ipsec.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:14:13.473 Installing symlink pointing to librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so.24 00:14:13.473 Installing symlink pointing to librte_pdcp.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so 00:14:13.473 Installing symlink pointing to librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.24 00:14:13.473 Installing symlink pointing to librte_fib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:14:13.473 Installing symlink pointing to librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.24 00:14:13.473 Installing symlink pointing to librte_port.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:14:13.473 Installing symlink pointing to librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.24 00:14:13.473 Installing symlink pointing to librte_pdump.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:14:13.473 Installing symlink pointing to librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.24 00:14:13.473 Installing symlink pointing to librte_table.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:14:13.473 Installing symlink pointing to librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.24 00:14:13.473 Installing symlink pointing to librte_pipeline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:14:13.473 Installing symlink pointing to librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.24 00:14:13.473 Installing symlink pointing to librte_graph.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:14:13.473 Installing symlink pointing to librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.24 00:14:13.473 Installing symlink pointing to librte_node.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:14:13.473 Installing symlink pointing to librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:14:13.473 Installing symlink pointing to librte_bus_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:14:13.473 Installing symlink pointing to librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:14:13.473 Installing symlink pointing to librte_bus_vdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:14:13.473 Installing symlink pointing to librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 00:14:13.473 Installing symlink pointing to librte_mempool_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:14:13.473 Installing symlink pointing to librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 
00:14:13.473 Installing symlink pointing to librte_net_i40e.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:14:13.473 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0' 00:14:13.473 21:27:34 -- common/autobuild_common.sh@189 -- $ uname -s 00:14:13.473 21:27:34 -- common/autobuild_common.sh@189 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:14:13.473 21:27:34 -- common/autobuild_common.sh@200 -- $ cat 00:14:13.473 21:27:34 -- common/autobuild_common.sh@205 -- $ cd /home/vagrant/spdk_repo/spdk 00:14:13.473 00:14:13.473 real 1m2.602s 00:14:13.473 user 7m36.917s 00:14:13.473 sys 1m13.236s 00:14:13.473 21:27:34 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:14:13.473 ************************************ 00:14:13.473 END TEST build_native_dpdk 00:14:13.473 ************************************ 00:14:13.473 21:27:34 -- common/autotest_common.sh@10 -- $ set +x 00:14:13.473 21:27:34 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:14:13.473 21:27:34 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:14:13.473 21:27:34 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:14:13.473 21:27:34 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:14:13.473 21:27:34 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:14:13.473 21:27:34 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:14:13.473 21:27:34 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:14:13.473 21:27:34 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-shared 00:14:13.473 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:14:13.732 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:14:13.732 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:14:13.732 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:14:13.990 Using 'verbs' RDMA provider 00:14:27.566 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done. 00:14:42.444 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:14:42.444 Creating mk/config.mk...done. 00:14:42.444 Creating mk/cc.flags.mk...done. 00:14:42.444 Type 'make' to build. 00:14:42.444 21:28:01 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:14:42.444 21:28:01 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:14:42.444 21:28:01 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:14:42.444 21:28:01 -- common/autotest_common.sh@10 -- $ set +x 00:14:42.444 ************************************ 00:14:42.444 START TEST make 00:14:42.444 ************************************ 00:14:42.444 21:28:01 -- common/autotest_common.sh@1104 -- $ make -j10 00:14:42.444 make[1]: Nothing to be done for 'all'. 
00:15:04.468 CC lib/ut_mock/mock.o 00:15:04.468 CC lib/log/log.o 00:15:04.468 CC lib/log/log_flags.o 00:15:04.468 CC lib/ut/ut.o 00:15:04.468 CC lib/log/log_deprecated.o 00:15:04.468 LIB libspdk_ut_mock.a 00:15:04.468 LIB libspdk_log.a 00:15:04.468 SO libspdk_ut_mock.so.5.0 00:15:04.468 LIB libspdk_ut.a 00:15:04.468 SO libspdk_log.so.6.1 00:15:04.468 SO libspdk_ut.so.1.0 00:15:04.468 SYMLINK libspdk_ut_mock.so 00:15:04.468 SYMLINK libspdk_ut.so 00:15:04.468 SYMLINK libspdk_log.so 00:15:04.468 CXX lib/trace_parser/trace.o 00:15:04.468 CC lib/ioat/ioat.o 00:15:04.468 CC lib/util/base64.o 00:15:04.468 CC lib/util/bit_array.o 00:15:04.468 CC lib/util/cpuset.o 00:15:04.468 CC lib/util/crc16.o 00:15:04.468 CC lib/util/crc32.o 00:15:04.468 CC lib/util/crc32c.o 00:15:04.468 CC lib/dma/dma.o 00:15:04.726 CC lib/vfio_user/host/vfio_user_pci.o 00:15:04.726 CC lib/util/crc32_ieee.o 00:15:04.726 CC lib/vfio_user/host/vfio_user.o 00:15:04.726 CC lib/util/crc64.o 00:15:04.726 CC lib/util/dif.o 00:15:04.726 LIB libspdk_dma.a 00:15:04.726 CC lib/util/fd.o 00:15:04.726 SO libspdk_dma.so.3.0 00:15:04.726 CC lib/util/file.o 00:15:04.726 LIB libspdk_ioat.a 00:15:04.726 CC lib/util/hexlify.o 00:15:04.726 SYMLINK libspdk_dma.so 00:15:04.726 CC lib/util/iov.o 00:15:04.726 SO libspdk_ioat.so.6.0 00:15:04.984 CC lib/util/math.o 00:15:04.984 SYMLINK libspdk_ioat.so 00:15:04.984 CC lib/util/pipe.o 00:15:04.984 CC lib/util/strerror_tls.o 00:15:04.984 CC lib/util/string.o 00:15:04.984 CC lib/util/uuid.o 00:15:04.984 LIB libspdk_vfio_user.a 00:15:04.984 CC lib/util/fd_group.o 00:15:04.984 SO libspdk_vfio_user.so.4.0 00:15:04.984 CC lib/util/xor.o 00:15:04.984 CC lib/util/zipf.o 00:15:04.984 SYMLINK libspdk_vfio_user.so 00:15:05.241 LIB libspdk_util.a 00:15:05.499 SO libspdk_util.so.8.0 00:15:05.499 SYMLINK libspdk_util.so 00:15:05.499 LIB libspdk_trace_parser.a 00:15:05.756 SO libspdk_trace_parser.so.4.0 00:15:05.756 CC lib/idxd/idxd.o 00:15:05.756 CC lib/idxd/idxd_user.o 00:15:05.756 CC lib/rdma/common.o 00:15:05.756 CC lib/idxd/idxd_kernel.o 00:15:05.756 CC lib/rdma/rdma_verbs.o 00:15:05.756 SYMLINK libspdk_trace_parser.so 00:15:05.756 CC lib/conf/conf.o 00:15:05.756 CC lib/vmd/vmd.o 00:15:05.756 CC lib/json/json_parse.o 00:15:05.756 CC lib/vmd/led.o 00:15:05.756 CC lib/env_dpdk/env.o 00:15:06.014 CC lib/env_dpdk/memory.o 00:15:06.014 CC lib/env_dpdk/pci.o 00:15:06.014 CC lib/json/json_util.o 00:15:06.014 CC lib/env_dpdk/init.o 00:15:06.014 LIB libspdk_conf.a 00:15:06.014 CC lib/json/json_write.o 00:15:06.014 LIB libspdk_rdma.a 00:15:06.014 SO libspdk_conf.so.5.0 00:15:06.014 SO libspdk_rdma.so.5.0 00:15:06.014 SYMLINK libspdk_conf.so 00:15:06.014 CC lib/env_dpdk/threads.o 00:15:06.271 SYMLINK libspdk_rdma.so 00:15:06.271 CC lib/env_dpdk/pci_ioat.o 00:15:06.271 CC lib/env_dpdk/pci_virtio.o 00:15:06.271 CC lib/env_dpdk/pci_vmd.o 00:15:06.271 CC lib/env_dpdk/pci_idxd.o 00:15:06.271 LIB libspdk_json.a 00:15:06.529 CC lib/env_dpdk/pci_event.o 00:15:06.529 CC lib/env_dpdk/sigbus_handler.o 00:15:06.529 CC lib/env_dpdk/pci_dpdk.o 00:15:06.529 SO libspdk_json.so.5.1 00:15:06.529 LIB libspdk_vmd.a 00:15:06.529 LIB libspdk_idxd.a 00:15:06.529 SO libspdk_idxd.so.11.0 00:15:06.529 SO libspdk_vmd.so.5.0 00:15:06.529 CC lib/env_dpdk/pci_dpdk_2207.o 00:15:06.529 SYMLINK libspdk_json.so 00:15:06.529 CC lib/env_dpdk/pci_dpdk_2211.o 00:15:06.529 SYMLINK libspdk_idxd.so 00:15:06.529 SYMLINK libspdk_vmd.so 00:15:06.787 CC lib/jsonrpc/jsonrpc_server.o 00:15:06.787 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:15:06.787 CC 
lib/jsonrpc/jsonrpc_client.o 00:15:06.787 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:15:07.046 LIB libspdk_jsonrpc.a 00:15:07.046 SO libspdk_jsonrpc.so.5.1 00:15:07.305 SYMLINK libspdk_jsonrpc.so 00:15:07.305 LIB libspdk_env_dpdk.a 00:15:07.305 CC lib/rpc/rpc.o 00:15:07.562 SO libspdk_env_dpdk.so.13.0 00:15:07.562 LIB libspdk_rpc.a 00:15:07.562 SYMLINK libspdk_env_dpdk.so 00:15:07.562 SO libspdk_rpc.so.5.0 00:15:07.820 SYMLINK libspdk_rpc.so 00:15:07.820 CC lib/trace/trace.o 00:15:07.820 CC lib/trace/trace_flags.o 00:15:07.820 CC lib/trace/trace_rpc.o 00:15:07.820 CC lib/notify/notify.o 00:15:07.820 CC lib/notify/notify_rpc.o 00:15:07.820 CC lib/sock/sock.o 00:15:07.820 CC lib/sock/sock_rpc.o 00:15:08.078 LIB libspdk_notify.a 00:15:08.078 SO libspdk_notify.so.5.0 00:15:08.078 LIB libspdk_trace.a 00:15:08.078 SYMLINK libspdk_notify.so 00:15:08.336 SO libspdk_trace.so.9.0 00:15:08.336 SYMLINK libspdk_trace.so 00:15:08.336 LIB libspdk_sock.a 00:15:08.336 SO libspdk_sock.so.8.0 00:15:08.594 SYMLINK libspdk_sock.so 00:15:08.595 CC lib/thread/thread.o 00:15:08.595 CC lib/thread/iobuf.o 00:15:08.595 CC lib/nvme/nvme_ctrlr_cmd.o 00:15:08.595 CC lib/nvme/nvme_ctrlr.o 00:15:08.595 CC lib/nvme/nvme_fabric.o 00:15:08.595 CC lib/nvme/nvme_ns_cmd.o 00:15:08.595 CC lib/nvme/nvme_ns.o 00:15:08.595 CC lib/nvme/nvme_pcie_common.o 00:15:08.595 CC lib/nvme/nvme_pcie.o 00:15:08.595 CC lib/nvme/nvme_qpair.o 00:15:08.974 CC lib/nvme/nvme.o 00:15:09.539 CC lib/nvme/nvme_quirks.o 00:15:09.539 CC lib/nvme/nvme_transport.o 00:15:09.539 CC lib/nvme/nvme_discovery.o 00:15:09.539 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:15:09.539 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:15:09.796 CC lib/nvme/nvme_tcp.o 00:15:09.796 CC lib/nvme/nvme_opal.o 00:15:09.796 CC lib/nvme/nvme_io_msg.o 00:15:10.054 CC lib/nvme/nvme_poll_group.o 00:15:10.054 CC lib/nvme/nvme_zns.o 00:15:10.311 CC lib/nvme/nvme_cuse.o 00:15:10.311 LIB libspdk_thread.a 00:15:10.311 CC lib/nvme/nvme_vfio_user.o 00:15:10.311 SO libspdk_thread.so.9.0 00:15:10.311 CC lib/nvme/nvme_rdma.o 00:15:10.311 SYMLINK libspdk_thread.so 00:15:10.311 CC lib/blob/blobstore.o 00:15:10.311 CC lib/accel/accel.o 00:15:10.569 CC lib/accel/accel_rpc.o 00:15:10.827 CC lib/accel/accel_sw.o 00:15:10.827 CC lib/blob/request.o 00:15:10.827 CC lib/init/json_config.o 00:15:10.827 CC lib/init/subsystem.o 00:15:10.827 CC lib/init/subsystem_rpc.o 00:15:11.084 CC lib/init/rpc.o 00:15:11.084 CC lib/blob/zeroes.o 00:15:11.084 CC lib/blob/blob_bs_dev.o 00:15:11.084 CC lib/virtio/virtio.o 00:15:11.084 CC lib/virtio/virtio_vhost_user.o 00:15:11.084 CC lib/virtio/virtio_vfio_user.o 00:15:11.342 CC lib/virtio/virtio_pci.o 00:15:11.342 LIB libspdk_init.a 00:15:11.342 SO libspdk_init.so.4.0 00:15:11.342 SYMLINK libspdk_init.so 00:15:11.601 LIB libspdk_accel.a 00:15:11.601 CC lib/event/app.o 00:15:11.601 CC lib/event/reactor.o 00:15:11.601 CC lib/event/log_rpc.o 00:15:11.601 CC lib/event/app_rpc.o 00:15:11.601 LIB libspdk_virtio.a 00:15:11.601 CC lib/event/scheduler_static.o 00:15:11.601 SO libspdk_accel.so.14.0 00:15:11.601 SO libspdk_virtio.so.6.0 00:15:11.601 SYMLINK libspdk_accel.so 00:15:11.601 SYMLINK libspdk_virtio.so 00:15:11.858 LIB libspdk_nvme.a 00:15:11.858 CC lib/bdev/bdev.o 00:15:11.858 CC lib/bdev/bdev_rpc.o 00:15:11.858 CC lib/bdev/bdev_zone.o 00:15:11.858 CC lib/bdev/part.o 00:15:11.858 CC lib/bdev/scsi_nvme.o 00:15:11.858 LIB libspdk_event.a 00:15:11.858 SO libspdk_nvme.so.12.0 00:15:12.115 SO libspdk_event.so.12.0 00:15:12.115 SYMLINK libspdk_event.so 00:15:12.372 SYMLINK libspdk_nvme.so 00:15:13.305 
LIB libspdk_blob.a 00:15:13.305 SO libspdk_blob.so.10.1 00:15:13.564 SYMLINK libspdk_blob.so 00:15:13.564 CC lib/blobfs/blobfs.o 00:15:13.564 CC lib/blobfs/tree.o 00:15:13.564 CC lib/lvol/lvol.o 00:15:14.534 LIB libspdk_bdev.a 00:15:14.534 SO libspdk_bdev.so.14.0 00:15:14.534 LIB libspdk_blobfs.a 00:15:14.534 SO libspdk_blobfs.so.9.0 00:15:14.534 SYMLINK libspdk_bdev.so 00:15:14.534 LIB libspdk_lvol.a 00:15:14.534 SYMLINK libspdk_blobfs.so 00:15:14.534 SO libspdk_lvol.so.9.1 00:15:14.791 SYMLINK libspdk_lvol.so 00:15:14.791 CC lib/nvmf/ctrlr_discovery.o 00:15:14.791 CC lib/nvmf/ctrlr.o 00:15:14.791 CC lib/nvmf/subsystem.o 00:15:14.791 CC lib/nvmf/ctrlr_bdev.o 00:15:14.791 CC lib/scsi/dev.o 00:15:14.791 CC lib/nvmf/nvmf.o 00:15:14.791 CC lib/nbd/nbd.o 00:15:14.791 CC lib/scsi/lun.o 00:15:14.791 CC lib/ublk/ublk.o 00:15:14.791 CC lib/ftl/ftl_core.o 00:15:15.048 CC lib/ftl/ftl_init.o 00:15:15.048 CC lib/scsi/port.o 00:15:15.048 CC lib/ftl/ftl_layout.o 00:15:15.306 CC lib/nbd/nbd_rpc.o 00:15:15.306 CC lib/scsi/scsi.o 00:15:15.306 CC lib/ftl/ftl_debug.o 00:15:15.306 CC lib/nvmf/nvmf_rpc.o 00:15:15.306 LIB libspdk_nbd.a 00:15:15.306 CC lib/scsi/scsi_bdev.o 00:15:15.306 SO libspdk_nbd.so.6.0 00:15:15.306 CC lib/ublk/ublk_rpc.o 00:15:15.564 SYMLINK libspdk_nbd.so 00:15:15.564 CC lib/ftl/ftl_io.o 00:15:15.564 CC lib/scsi/scsi_pr.o 00:15:15.564 CC lib/ftl/ftl_sb.o 00:15:15.564 CC lib/nvmf/transport.o 00:15:15.564 LIB libspdk_ublk.a 00:15:15.564 SO libspdk_ublk.so.2.0 00:15:15.822 CC lib/nvmf/tcp.o 00:15:15.822 SYMLINK libspdk_ublk.so 00:15:15.822 CC lib/nvmf/rdma.o 00:15:15.822 CC lib/ftl/ftl_l2p.o 00:15:15.822 CC lib/scsi/scsi_rpc.o 00:15:15.822 CC lib/scsi/task.o 00:15:15.822 CC lib/ftl/ftl_l2p_flat.o 00:15:16.080 CC lib/ftl/ftl_nv_cache.o 00:15:16.080 CC lib/ftl/ftl_band.o 00:15:16.080 CC lib/ftl/ftl_band_ops.o 00:15:16.080 LIB libspdk_scsi.a 00:15:16.080 SO libspdk_scsi.so.8.0 00:15:16.080 CC lib/ftl/ftl_writer.o 00:15:16.080 CC lib/ftl/ftl_rq.o 00:15:16.080 CC lib/ftl/ftl_reloc.o 00:15:16.338 SYMLINK libspdk_scsi.so 00:15:16.338 CC lib/ftl/ftl_l2p_cache.o 00:15:16.338 CC lib/ftl/ftl_p2l.o 00:15:16.338 CC lib/ftl/mngt/ftl_mngt.o 00:15:16.338 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:15:16.338 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:15:16.338 CC lib/ftl/mngt/ftl_mngt_startup.o 00:15:16.596 CC lib/ftl/mngt/ftl_mngt_md.o 00:15:16.596 CC lib/ftl/mngt/ftl_mngt_misc.o 00:15:16.596 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:15:16.596 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:15:16.596 CC lib/ftl/mngt/ftl_mngt_band.o 00:15:16.596 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:15:16.854 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:15:16.854 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:15:16.854 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:15:16.854 CC lib/ftl/utils/ftl_conf.o 00:15:17.112 CC lib/iscsi/conn.o 00:15:17.112 CC lib/iscsi/init_grp.o 00:15:17.112 CC lib/ftl/utils/ftl_md.o 00:15:17.112 CC lib/vhost/vhost.o 00:15:17.112 CC lib/ftl/utils/ftl_mempool.o 00:15:17.112 CC lib/ftl/utils/ftl_bitmap.o 00:15:17.112 CC lib/iscsi/iscsi.o 00:15:17.369 CC lib/iscsi/md5.o 00:15:17.369 CC lib/iscsi/param.o 00:15:17.369 CC lib/vhost/vhost_rpc.o 00:15:17.369 CC lib/iscsi/portal_grp.o 00:15:17.369 CC lib/iscsi/tgt_node.o 00:15:17.369 CC lib/iscsi/iscsi_subsystem.o 00:15:17.628 CC lib/ftl/utils/ftl_property.o 00:15:17.628 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:15:17.628 CC lib/iscsi/iscsi_rpc.o 00:15:17.628 CC lib/vhost/vhost_scsi.o 00:15:17.885 CC lib/iscsi/task.o 00:15:17.885 CC lib/vhost/vhost_blk.o 00:15:17.885 CC lib/vhost/rte_vhost_user.o 00:15:17.885 LIB 
libspdk_nvmf.a 00:15:17.885 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:15:17.885 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:15:17.885 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:15:17.885 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:15:17.885 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:15:17.885 SO libspdk_nvmf.so.17.0 00:15:18.143 CC lib/ftl/upgrade/ftl_sb_v3.o 00:15:18.143 CC lib/ftl/upgrade/ftl_sb_v5.o 00:15:18.143 CC lib/ftl/nvc/ftl_nvc_dev.o 00:15:18.143 SYMLINK libspdk_nvmf.so 00:15:18.143 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:15:18.143 CC lib/ftl/base/ftl_base_dev.o 00:15:18.143 CC lib/ftl/base/ftl_base_bdev.o 00:15:18.410 CC lib/ftl/ftl_trace.o 00:15:18.410 LIB libspdk_iscsi.a 00:15:18.682 SO libspdk_iscsi.so.7.0 00:15:18.682 LIB libspdk_ftl.a 00:15:18.682 SYMLINK libspdk_iscsi.so 00:15:18.940 SO libspdk_ftl.so.8.0 00:15:18.941 LIB libspdk_vhost.a 00:15:18.941 SO libspdk_vhost.so.7.1 00:15:19.198 SYMLINK libspdk_vhost.so 00:15:19.198 SYMLINK libspdk_ftl.so 00:15:19.456 CC module/env_dpdk/env_dpdk_rpc.o 00:15:19.456 CC module/sock/posix/posix.o 00:15:19.456 CC module/blob/bdev/blob_bdev.o 00:15:19.456 CC module/scheduler/dynamic/scheduler_dynamic.o 00:15:19.456 CC module/sock/uring/uring.o 00:15:19.456 CC module/scheduler/gscheduler/gscheduler.o 00:15:19.456 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:15:19.456 CC module/accel/ioat/accel_ioat.o 00:15:19.456 CC module/accel/dsa/accel_dsa.o 00:15:19.456 CC module/accel/error/accel_error.o 00:15:19.714 LIB libspdk_env_dpdk_rpc.a 00:15:19.714 SO libspdk_env_dpdk_rpc.so.5.0 00:15:19.714 LIB libspdk_scheduler_dpdk_governor.a 00:15:19.714 SO libspdk_scheduler_dpdk_governor.so.3.0 00:15:19.714 LIB libspdk_scheduler_gscheduler.a 00:15:19.714 CC module/accel/error/accel_error_rpc.o 00:15:19.714 SYMLINK libspdk_env_dpdk_rpc.so 00:15:19.714 SO libspdk_scheduler_gscheduler.so.3.0 00:15:19.714 CC module/accel/ioat/accel_ioat_rpc.o 00:15:19.714 LIB libspdk_scheduler_dynamic.a 00:15:19.714 SYMLINK libspdk_scheduler_dpdk_governor.so 00:15:19.714 CC module/accel/dsa/accel_dsa_rpc.o 00:15:19.714 SO libspdk_scheduler_dynamic.so.3.0 00:15:19.973 LIB libspdk_blob_bdev.a 00:15:19.973 SYMLINK libspdk_scheduler_gscheduler.so 00:15:19.973 SYMLINK libspdk_scheduler_dynamic.so 00:15:19.973 SO libspdk_blob_bdev.so.10.1 00:15:19.973 LIB libspdk_accel_error.a 00:15:19.973 LIB libspdk_accel_ioat.a 00:15:19.973 CC module/accel/iaa/accel_iaa_rpc.o 00:15:19.973 CC module/accel/iaa/accel_iaa.o 00:15:19.973 SYMLINK libspdk_blob_bdev.so 00:15:19.973 LIB libspdk_accel_dsa.a 00:15:19.973 SO libspdk_accel_ioat.so.5.0 00:15:19.973 SO libspdk_accel_error.so.1.0 00:15:19.973 SO libspdk_accel_dsa.so.4.0 00:15:19.973 SYMLINK libspdk_accel_error.so 00:15:19.973 SYMLINK libspdk_accel_ioat.so 00:15:19.973 SYMLINK libspdk_accel_dsa.so 00:15:20.232 CC module/bdev/delay/vbdev_delay.o 00:15:20.232 CC module/bdev/lvol/vbdev_lvol.o 00:15:20.232 CC module/blobfs/bdev/blobfs_bdev.o 00:15:20.232 CC module/bdev/error/vbdev_error.o 00:15:20.232 CC module/bdev/gpt/gpt.o 00:15:20.232 LIB libspdk_accel_iaa.a 00:15:20.232 CC module/bdev/malloc/bdev_malloc.o 00:15:20.232 SO libspdk_accel_iaa.so.2.0 00:15:20.232 LIB libspdk_sock_uring.a 00:15:20.232 CC module/bdev/null/bdev_null.o 00:15:20.232 LIB libspdk_sock_posix.a 00:15:20.232 SO libspdk_sock_uring.so.4.0 00:15:20.232 SO libspdk_sock_posix.so.5.0 00:15:20.232 SYMLINK libspdk_accel_iaa.so 00:15:20.232 CC module/bdev/null/bdev_null_rpc.o 00:15:20.232 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:15:20.490 SYMLINK libspdk_sock_uring.so 00:15:20.490 CC 
module/bdev/gpt/vbdev_gpt.o 00:15:20.490 SYMLINK libspdk_sock_posix.so 00:15:20.490 CC module/bdev/error/vbdev_error_rpc.o 00:15:20.490 CC module/bdev/delay/vbdev_delay_rpc.o 00:15:20.490 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:15:20.490 CC module/bdev/malloc/bdev_malloc_rpc.o 00:15:20.490 LIB libspdk_bdev_null.a 00:15:20.490 CC module/bdev/nvme/bdev_nvme.o 00:15:20.749 CC module/bdev/passthru/vbdev_passthru.o 00:15:20.749 LIB libspdk_blobfs_bdev.a 00:15:20.749 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:15:20.749 SO libspdk_bdev_null.so.5.0 00:15:20.749 SO libspdk_blobfs_bdev.so.5.0 00:15:20.749 LIB libspdk_bdev_error.a 00:15:20.749 LIB libspdk_bdev_gpt.a 00:15:20.749 SO libspdk_bdev_error.so.5.0 00:15:20.749 SO libspdk_bdev_gpt.so.5.0 00:15:20.749 LIB libspdk_bdev_delay.a 00:15:20.749 SYMLINK libspdk_bdev_null.so 00:15:20.749 SO libspdk_bdev_delay.so.5.0 00:15:20.749 SYMLINK libspdk_blobfs_bdev.so 00:15:20.749 LIB libspdk_bdev_malloc.a 00:15:20.749 SYMLINK libspdk_bdev_error.so 00:15:20.749 SYMLINK libspdk_bdev_gpt.so 00:15:20.749 SYMLINK libspdk_bdev_delay.so 00:15:20.749 SO libspdk_bdev_malloc.so.5.0 00:15:20.749 LIB libspdk_bdev_lvol.a 00:15:21.007 CC module/bdev/raid/bdev_raid.o 00:15:21.007 CC module/bdev/zone_block/vbdev_zone_block.o 00:15:21.007 SYMLINK libspdk_bdev_malloc.so 00:15:21.007 SO libspdk_bdev_lvol.so.5.0 00:15:21.007 CC module/bdev/split/vbdev_split.o 00:15:21.007 CC module/bdev/uring/bdev_uring.o 00:15:21.007 LIB libspdk_bdev_passthru.a 00:15:21.007 CC module/bdev/aio/bdev_aio.o 00:15:21.007 CC module/bdev/ftl/bdev_ftl.o 00:15:21.007 SO libspdk_bdev_passthru.so.5.0 00:15:21.007 SYMLINK libspdk_bdev_lvol.so 00:15:21.007 CC module/bdev/iscsi/bdev_iscsi.o 00:15:21.007 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:15:21.007 SYMLINK libspdk_bdev_passthru.so 00:15:21.007 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:15:21.265 CC module/bdev/split/vbdev_split_rpc.o 00:15:21.265 CC module/bdev/ftl/bdev_ftl_rpc.o 00:15:21.265 CC module/bdev/raid/bdev_raid_rpc.o 00:15:21.265 LIB libspdk_bdev_zone_block.a 00:15:21.265 CC module/bdev/raid/bdev_raid_sb.o 00:15:21.265 CC module/bdev/aio/bdev_aio_rpc.o 00:15:21.265 SO libspdk_bdev_zone_block.so.5.0 00:15:21.265 LIB libspdk_bdev_split.a 00:15:21.265 CC module/bdev/uring/bdev_uring_rpc.o 00:15:21.265 SYMLINK libspdk_bdev_zone_block.so 00:15:21.265 SO libspdk_bdev_split.so.5.0 00:15:21.522 LIB libspdk_bdev_iscsi.a 00:15:21.522 SO libspdk_bdev_iscsi.so.5.0 00:15:21.522 LIB libspdk_bdev_ftl.a 00:15:21.522 SYMLINK libspdk_bdev_split.so 00:15:21.522 LIB libspdk_bdev_aio.a 00:15:21.522 CC module/bdev/raid/raid0.o 00:15:21.522 CC module/bdev/raid/raid1.o 00:15:21.522 SO libspdk_bdev_ftl.so.5.0 00:15:21.522 CC module/bdev/raid/concat.o 00:15:21.522 SYMLINK libspdk_bdev_iscsi.so 00:15:21.522 CC module/bdev/virtio/bdev_virtio_scsi.o 00:15:21.522 SO libspdk_bdev_aio.so.5.0 00:15:21.522 LIB libspdk_bdev_uring.a 00:15:21.522 SYMLINK libspdk_bdev_ftl.so 00:15:21.522 CC module/bdev/nvme/bdev_nvme_rpc.o 00:15:21.522 CC module/bdev/virtio/bdev_virtio_blk.o 00:15:21.522 SO libspdk_bdev_uring.so.5.0 00:15:21.522 SYMLINK libspdk_bdev_aio.so 00:15:21.522 CC module/bdev/virtio/bdev_virtio_rpc.o 00:15:21.522 SYMLINK libspdk_bdev_uring.so 00:15:21.522 CC module/bdev/nvme/nvme_rpc.o 00:15:21.779 CC module/bdev/nvme/bdev_mdns_client.o 00:15:21.779 CC module/bdev/nvme/vbdev_opal.o 00:15:21.779 CC module/bdev/nvme/vbdev_opal_rpc.o 00:15:21.779 LIB libspdk_bdev_raid.a 00:15:21.779 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:15:21.779 SO 
libspdk_bdev_raid.so.5.0 00:15:22.037 SYMLINK libspdk_bdev_raid.so 00:15:22.037 LIB libspdk_bdev_virtio.a 00:15:22.037 SO libspdk_bdev_virtio.so.5.0 00:15:22.294 SYMLINK libspdk_bdev_virtio.so 00:15:22.878 LIB libspdk_bdev_nvme.a 00:15:22.878 SO libspdk_bdev_nvme.so.6.0 00:15:23.166 SYMLINK libspdk_bdev_nvme.so 00:15:23.425 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:15:23.425 CC module/event/subsystems/sock/sock.o 00:15:23.425 CC module/event/subsystems/vmd/vmd.o 00:15:23.425 CC module/event/subsystems/vmd/vmd_rpc.o 00:15:23.425 CC module/event/subsystems/iobuf/iobuf.o 00:15:23.425 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:15:23.425 CC module/event/subsystems/scheduler/scheduler.o 00:15:23.684 LIB libspdk_event_iobuf.a 00:15:23.684 LIB libspdk_event_vhost_blk.a 00:15:23.684 LIB libspdk_event_vmd.a 00:15:23.684 LIB libspdk_event_sock.a 00:15:23.684 SO libspdk_event_vhost_blk.so.2.0 00:15:23.684 SO libspdk_event_iobuf.so.2.0 00:15:23.684 LIB libspdk_event_scheduler.a 00:15:23.684 SO libspdk_event_vmd.so.5.0 00:15:23.684 SO libspdk_event_sock.so.4.0 00:15:23.684 SO libspdk_event_scheduler.so.3.0 00:15:23.684 SYMLINK libspdk_event_vhost_blk.so 00:15:23.684 SYMLINK libspdk_event_iobuf.so 00:15:23.684 SYMLINK libspdk_event_sock.so 00:15:23.684 SYMLINK libspdk_event_vmd.so 00:15:23.684 SYMLINK libspdk_event_scheduler.so 00:15:23.942 CC module/event/subsystems/accel/accel.o 00:15:23.942 LIB libspdk_event_accel.a 00:15:24.201 SO libspdk_event_accel.so.5.0 00:15:24.201 SYMLINK libspdk_event_accel.so 00:15:24.460 CC module/event/subsystems/bdev/bdev.o 00:15:24.460 LIB libspdk_event_bdev.a 00:15:24.460 SO libspdk_event_bdev.so.5.0 00:15:24.718 SYMLINK libspdk_event_bdev.so 00:15:24.718 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:15:24.718 CC module/event/subsystems/nbd/nbd.o 00:15:24.718 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:15:24.718 CC module/event/subsystems/scsi/scsi.o 00:15:24.718 CC module/event/subsystems/ublk/ublk.o 00:15:24.976 LIB libspdk_event_nbd.a 00:15:24.976 SO libspdk_event_nbd.so.5.0 00:15:24.976 LIB libspdk_event_ublk.a 00:15:24.976 LIB libspdk_event_nvmf.a 00:15:24.976 SO libspdk_event_ublk.so.2.0 00:15:24.976 LIB libspdk_event_scsi.a 00:15:24.976 SYMLINK libspdk_event_nbd.so 00:15:24.976 SO libspdk_event_nvmf.so.5.0 00:15:24.976 SYMLINK libspdk_event_ublk.so 00:15:24.976 SO libspdk_event_scsi.so.5.0 00:15:25.234 SYMLINK libspdk_event_nvmf.so 00:15:25.234 SYMLINK libspdk_event_scsi.so 00:15:25.234 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:15:25.234 CC module/event/subsystems/iscsi/iscsi.o 00:15:25.491 LIB libspdk_event_vhost_scsi.a 00:15:25.491 LIB libspdk_event_iscsi.a 00:15:25.491 SO libspdk_event_vhost_scsi.so.2.0 00:15:25.491 SO libspdk_event_iscsi.so.5.0 00:15:25.491 SYMLINK libspdk_event_iscsi.so 00:15:25.491 SYMLINK libspdk_event_vhost_scsi.so 00:15:25.748 SO libspdk.so.5.0 00:15:25.748 SYMLINK libspdk.so 00:15:25.748 CC app/trace_record/trace_record.o 00:15:25.748 CXX app/trace/trace.o 00:15:25.748 CC app/spdk_nvme_identify/identify.o 00:15:25.748 CC app/spdk_nvme_perf/perf.o 00:15:25.748 CC app/spdk_lspci/spdk_lspci.o 00:15:25.748 CC app/iscsi_tgt/iscsi_tgt.o 00:15:26.006 CC app/nvmf_tgt/nvmf_main.o 00:15:26.006 CC examples/accel/perf/accel_perf.o 00:15:26.006 CC app/spdk_tgt/spdk_tgt.o 00:15:26.006 LINK spdk_lspci 00:15:26.006 CC test/accel/dif/dif.o 00:15:26.006 LINK spdk_trace_record 00:15:26.006 LINK nvmf_tgt 00:15:26.264 LINK iscsi_tgt 00:15:26.264 LINK spdk_tgt 00:15:26.264 LINK spdk_trace 00:15:26.264 CC 
test/app/bdev_svc/bdev_svc.o 00:15:26.522 CC test/bdev/bdevio/bdevio.o 00:15:26.522 LINK dif 00:15:26.522 LINK accel_perf 00:15:26.522 CC app/spdk_nvme_discover/discovery_aer.o 00:15:26.522 CC app/spdk_top/spdk_top.o 00:15:26.522 CC examples/bdev/hello_world/hello_bdev.o 00:15:26.522 LINK bdev_svc 00:15:26.522 CC examples/blob/hello_world/hello_blob.o 00:15:26.522 LINK spdk_nvme_identify 00:15:26.780 LINK spdk_nvme_discover 00:15:26.780 CC examples/bdev/bdevperf/bdevperf.o 00:15:26.780 CC examples/ioat/perf/perf.o 00:15:26.780 LINK spdk_nvme_perf 00:15:26.780 LINK hello_bdev 00:15:26.780 LINK bdevio 00:15:26.780 CC app/vhost/vhost.o 00:15:27.039 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:15:27.039 LINK hello_blob 00:15:27.039 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:15:27.039 LINK ioat_perf 00:15:27.039 CC examples/blob/cli/blobcli.o 00:15:27.039 LINK vhost 00:15:27.039 CC app/spdk_dd/spdk_dd.o 00:15:27.297 CC app/fio/nvme/fio_plugin.o 00:15:27.297 CC app/fio/bdev/fio_plugin.o 00:15:27.297 CC examples/ioat/verify/verify.o 00:15:27.297 LINK spdk_top 00:15:27.297 LINK nvme_fuzz 00:15:27.566 CC examples/nvme/hello_world/hello_world.o 00:15:27.566 LINK spdk_dd 00:15:27.566 LINK bdevperf 00:15:27.566 LINK verify 00:15:27.566 LINK blobcli 00:15:27.566 CC examples/nvme/reconnect/reconnect.o 00:15:27.566 CC examples/nvme/nvme_manage/nvme_manage.o 00:15:27.566 LINK hello_world 00:15:27.869 LINK spdk_bdev 00:15:27.869 CC examples/nvme/arbitration/arbitration.o 00:15:27.869 CC examples/nvme/hotplug/hotplug.o 00:15:27.869 CC examples/nvme/cmb_copy/cmb_copy.o 00:15:27.869 LINK spdk_nvme 00:15:27.869 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:15:27.869 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:15:27.869 CC examples/sock/hello_world/hello_sock.o 00:15:27.869 LINK reconnect 00:15:28.127 LINK cmb_copy 00:15:28.127 CC examples/vmd/lsvmd/lsvmd.o 00:15:28.127 LINK hotplug 00:15:28.127 CC examples/vmd/led/led.o 00:15:28.127 LINK nvme_manage 00:15:28.127 LINK arbitration 00:15:28.127 LINK hello_sock 00:15:28.127 LINK lsvmd 00:15:28.385 LINK led 00:15:28.385 CC examples/nvme/abort/abort.o 00:15:28.385 CC examples/nvmf/nvmf/nvmf.o 00:15:28.385 LINK vhost_fuzz 00:15:28.385 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:15:28.385 CC examples/util/zipf/zipf.o 00:15:28.385 CC test/app/histogram_perf/histogram_perf.o 00:15:28.385 CC examples/thread/thread/thread_ex.o 00:15:28.644 CC test/blobfs/mkfs/mkfs.o 00:15:28.644 LINK pmr_persistence 00:15:28.644 LINK zipf 00:15:28.644 CC examples/interrupt_tgt/interrupt_tgt.o 00:15:28.644 CC examples/idxd/perf/perf.o 00:15:28.644 LINK iscsi_fuzz 00:15:28.644 LINK nvmf 00:15:28.644 LINK histogram_perf 00:15:28.644 LINK abort 00:15:28.644 LINK mkfs 00:15:28.644 LINK thread 00:15:28.902 LINK interrupt_tgt 00:15:28.902 CC test/app/jsoncat/jsoncat.o 00:15:28.902 CC test/app/stub/stub.o 00:15:28.902 TEST_HEADER include/spdk/accel.h 00:15:28.902 TEST_HEADER include/spdk/accel_module.h 00:15:28.902 TEST_HEADER include/spdk/assert.h 00:15:28.902 TEST_HEADER include/spdk/barrier.h 00:15:28.902 TEST_HEADER include/spdk/base64.h 00:15:28.902 TEST_HEADER include/spdk/bdev.h 00:15:28.902 TEST_HEADER include/spdk/bdev_module.h 00:15:28.902 TEST_HEADER include/spdk/bdev_zone.h 00:15:28.902 TEST_HEADER include/spdk/bit_array.h 00:15:28.902 TEST_HEADER include/spdk/bit_pool.h 00:15:28.902 TEST_HEADER include/spdk/blob_bdev.h 00:15:28.902 TEST_HEADER include/spdk/blobfs_bdev.h 00:15:28.902 TEST_HEADER include/spdk/blobfs.h 00:15:28.902 TEST_HEADER include/spdk/blob.h 00:15:28.902 
TEST_HEADER include/spdk/conf.h 00:15:28.902 TEST_HEADER include/spdk/config.h 00:15:28.902 TEST_HEADER include/spdk/cpuset.h 00:15:28.902 TEST_HEADER include/spdk/crc16.h 00:15:28.902 TEST_HEADER include/spdk/crc32.h 00:15:28.902 TEST_HEADER include/spdk/crc64.h 00:15:28.902 TEST_HEADER include/spdk/dif.h 00:15:28.902 TEST_HEADER include/spdk/dma.h 00:15:28.902 TEST_HEADER include/spdk/endian.h 00:15:28.902 TEST_HEADER include/spdk/env_dpdk.h 00:15:28.902 TEST_HEADER include/spdk/env.h 00:15:28.902 TEST_HEADER include/spdk/event.h 00:15:28.902 TEST_HEADER include/spdk/fd_group.h 00:15:28.902 TEST_HEADER include/spdk/fd.h 00:15:28.902 TEST_HEADER include/spdk/file.h 00:15:28.902 LINK jsoncat 00:15:28.902 TEST_HEADER include/spdk/ftl.h 00:15:28.902 LINK idxd_perf 00:15:28.902 TEST_HEADER include/spdk/gpt_spec.h 00:15:28.902 TEST_HEADER include/spdk/hexlify.h 00:15:28.902 TEST_HEADER include/spdk/histogram_data.h 00:15:28.902 TEST_HEADER include/spdk/idxd.h 00:15:28.902 TEST_HEADER include/spdk/idxd_spec.h 00:15:28.902 TEST_HEADER include/spdk/init.h 00:15:28.902 TEST_HEADER include/spdk/ioat.h 00:15:28.902 TEST_HEADER include/spdk/ioat_spec.h 00:15:28.902 CC test/dma/test_dma/test_dma.o 00:15:28.902 TEST_HEADER include/spdk/iscsi_spec.h 00:15:28.902 TEST_HEADER include/spdk/json.h 00:15:28.902 TEST_HEADER include/spdk/jsonrpc.h 00:15:28.902 TEST_HEADER include/spdk/likely.h 00:15:28.902 TEST_HEADER include/spdk/log.h 00:15:28.902 TEST_HEADER include/spdk/lvol.h 00:15:28.902 TEST_HEADER include/spdk/memory.h 00:15:28.902 TEST_HEADER include/spdk/mmio.h 00:15:28.902 TEST_HEADER include/spdk/nbd.h 00:15:28.902 TEST_HEADER include/spdk/notify.h 00:15:28.902 TEST_HEADER include/spdk/nvme.h 00:15:28.902 TEST_HEADER include/spdk/nvme_intel.h 00:15:28.902 TEST_HEADER include/spdk/nvme_ocssd.h 00:15:28.902 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:15:28.902 TEST_HEADER include/spdk/nvme_spec.h 00:15:28.902 TEST_HEADER include/spdk/nvme_zns.h 00:15:28.902 LINK stub 00:15:28.902 TEST_HEADER include/spdk/nvmf_cmd.h 00:15:28.902 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:15:28.902 TEST_HEADER include/spdk/nvmf.h 00:15:28.902 CC test/event/reactor/reactor.o 00:15:28.902 CC test/event/event_perf/event_perf.o 00:15:28.902 TEST_HEADER include/spdk/nvmf_spec.h 00:15:29.160 CC test/env/mem_callbacks/mem_callbacks.o 00:15:29.160 TEST_HEADER include/spdk/nvmf_transport.h 00:15:29.160 TEST_HEADER include/spdk/opal.h 00:15:29.160 TEST_HEADER include/spdk/opal_spec.h 00:15:29.160 TEST_HEADER include/spdk/pci_ids.h 00:15:29.160 TEST_HEADER include/spdk/pipe.h 00:15:29.160 CC test/env/vtophys/vtophys.o 00:15:29.160 TEST_HEADER include/spdk/queue.h 00:15:29.160 TEST_HEADER include/spdk/reduce.h 00:15:29.160 TEST_HEADER include/spdk/rpc.h 00:15:29.160 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:15:29.160 TEST_HEADER include/spdk/scheduler.h 00:15:29.160 TEST_HEADER include/spdk/scsi.h 00:15:29.160 TEST_HEADER include/spdk/scsi_spec.h 00:15:29.160 TEST_HEADER include/spdk/sock.h 00:15:29.160 TEST_HEADER include/spdk/stdinc.h 00:15:29.160 TEST_HEADER include/spdk/string.h 00:15:29.160 TEST_HEADER include/spdk/thread.h 00:15:29.160 TEST_HEADER include/spdk/trace.h 00:15:29.160 TEST_HEADER include/spdk/trace_parser.h 00:15:29.160 TEST_HEADER include/spdk/tree.h 00:15:29.160 TEST_HEADER include/spdk/ublk.h 00:15:29.160 TEST_HEADER include/spdk/util.h 00:15:29.160 TEST_HEADER include/spdk/uuid.h 00:15:29.160 TEST_HEADER include/spdk/version.h 00:15:29.160 TEST_HEADER include/spdk/vfio_user_pci.h 00:15:29.160 
TEST_HEADER include/spdk/vfio_user_spec.h 00:15:29.160 TEST_HEADER include/spdk/vhost.h 00:15:29.160 TEST_HEADER include/spdk/vmd.h 00:15:29.160 TEST_HEADER include/spdk/xor.h 00:15:29.160 TEST_HEADER include/spdk/zipf.h 00:15:29.160 CXX test/cpp_headers/accel.o 00:15:29.160 CC test/event/reactor_perf/reactor_perf.o 00:15:29.160 CXX test/cpp_headers/accel_module.o 00:15:29.160 CC test/event/app_repeat/app_repeat.o 00:15:29.160 LINK event_perf 00:15:29.160 LINK reactor 00:15:29.160 LINK vtophys 00:15:29.160 LINK env_dpdk_post_init 00:15:29.419 LINK reactor_perf 00:15:29.419 LINK test_dma 00:15:29.419 CXX test/cpp_headers/assert.o 00:15:29.419 LINK app_repeat 00:15:29.419 CC test/env/memory/memory_ut.o 00:15:29.419 CC test/env/pci/pci_ut.o 00:15:29.419 CC test/event/scheduler/scheduler.o 00:15:29.419 CC test/lvol/esnap/esnap.o 00:15:29.419 CC test/nvme/aer/aer.o 00:15:29.677 CC test/nvme/reset/reset.o 00:15:29.677 CXX test/cpp_headers/barrier.o 00:15:29.677 CXX test/cpp_headers/base64.o 00:15:29.677 CC test/rpc_client/rpc_client_test.o 00:15:29.677 LINK mem_callbacks 00:15:29.677 LINK scheduler 00:15:29.677 CXX test/cpp_headers/bdev.o 00:15:29.677 LINK aer 00:15:29.935 CC test/nvme/sgl/sgl.o 00:15:29.935 LINK rpc_client_test 00:15:29.935 LINK pci_ut 00:15:29.935 LINK reset 00:15:29.935 CXX test/cpp_headers/bdev_module.o 00:15:29.935 CXX test/cpp_headers/bdev_zone.o 00:15:29.935 CC test/thread/poller_perf/poller_perf.o 00:15:29.935 CC test/nvme/e2edp/nvme_dp.o 00:15:30.193 LINK sgl 00:15:30.193 CC test/nvme/err_injection/err_injection.o 00:15:30.193 CC test/nvme/overhead/overhead.o 00:15:30.193 CXX test/cpp_headers/bit_array.o 00:15:30.193 LINK poller_perf 00:15:30.193 CXX test/cpp_headers/bit_pool.o 00:15:30.193 CC test/nvme/startup/startup.o 00:15:30.193 LINK err_injection 00:15:30.193 LINK nvme_dp 00:15:30.193 CC test/nvme/reserve/reserve.o 00:15:30.451 CXX test/cpp_headers/blob_bdev.o 00:15:30.451 CC test/nvme/simple_copy/simple_copy.o 00:15:30.451 CC test/nvme/connect_stress/connect_stress.o 00:15:30.451 LINK startup 00:15:30.451 LINK overhead 00:15:30.451 LINK memory_ut 00:15:30.451 CC test/nvme/boot_partition/boot_partition.o 00:15:30.451 CC test/nvme/compliance/nvme_compliance.o 00:15:30.451 LINK reserve 00:15:30.710 CXX test/cpp_headers/blobfs_bdev.o 00:15:30.710 LINK connect_stress 00:15:30.710 LINK simple_copy 00:15:30.710 LINK boot_partition 00:15:30.710 CC test/nvme/doorbell_aers/doorbell_aers.o 00:15:30.710 CXX test/cpp_headers/blobfs.o 00:15:30.710 CC test/nvme/fused_ordering/fused_ordering.o 00:15:30.710 CC test/nvme/fdp/fdp.o 00:15:30.710 CXX test/cpp_headers/blob.o 00:15:30.710 CXX test/cpp_headers/conf.o 00:15:30.710 CXX test/cpp_headers/config.o 00:15:30.710 LINK nvme_compliance 00:15:30.967 CXX test/cpp_headers/cpuset.o 00:15:30.967 CXX test/cpp_headers/crc16.o 00:15:30.967 LINK doorbell_aers 00:15:30.967 LINK fused_ordering 00:15:30.967 CC test/nvme/cuse/cuse.o 00:15:30.967 CXX test/cpp_headers/crc32.o 00:15:30.967 CXX test/cpp_headers/crc64.o 00:15:30.967 CXX test/cpp_headers/dif.o 00:15:30.967 CXX test/cpp_headers/dma.o 00:15:30.967 CXX test/cpp_headers/endian.o 00:15:30.967 CXX test/cpp_headers/env_dpdk.o 00:15:30.967 LINK fdp 00:15:30.967 CXX test/cpp_headers/env.o 00:15:31.225 CXX test/cpp_headers/event.o 00:15:31.225 CXX test/cpp_headers/fd_group.o 00:15:31.225 CXX test/cpp_headers/fd.o 00:15:31.225 CXX test/cpp_headers/file.o 00:15:31.225 CXX test/cpp_headers/ftl.o 00:15:31.225 CXX test/cpp_headers/gpt_spec.o 00:15:31.225 CXX test/cpp_headers/hexlify.o 
00:15:31.225 CXX test/cpp_headers/histogram_data.o 00:15:31.225 CXX test/cpp_headers/idxd.o 00:15:31.225 CXX test/cpp_headers/idxd_spec.o 00:15:31.225 CXX test/cpp_headers/init.o 00:15:31.483 CXX test/cpp_headers/ioat.o 00:15:31.483 CXX test/cpp_headers/ioat_spec.o 00:15:31.483 CXX test/cpp_headers/iscsi_spec.o 00:15:31.483 CXX test/cpp_headers/json.o 00:15:31.483 CXX test/cpp_headers/jsonrpc.o 00:15:31.483 CXX test/cpp_headers/likely.o 00:15:31.483 CXX test/cpp_headers/log.o 00:15:31.483 CXX test/cpp_headers/lvol.o 00:15:31.483 CXX test/cpp_headers/memory.o 00:15:31.483 CXX test/cpp_headers/mmio.o 00:15:31.483 CXX test/cpp_headers/nbd.o 00:15:31.742 CXX test/cpp_headers/notify.o 00:15:31.742 CXX test/cpp_headers/nvme.o 00:15:31.742 CXX test/cpp_headers/nvme_intel.o 00:15:31.742 CXX test/cpp_headers/nvme_ocssd.o 00:15:31.742 CXX test/cpp_headers/nvme_ocssd_spec.o 00:15:31.742 CXX test/cpp_headers/nvme_spec.o 00:15:31.742 CXX test/cpp_headers/nvme_zns.o 00:15:31.742 CXX test/cpp_headers/nvmf_cmd.o 00:15:31.742 CXX test/cpp_headers/nvmf_fc_spec.o 00:15:31.742 CXX test/cpp_headers/nvmf.o 00:15:32.000 CXX test/cpp_headers/nvmf_spec.o 00:15:32.000 CXX test/cpp_headers/nvmf_transport.o 00:15:32.000 CXX test/cpp_headers/opal.o 00:15:32.000 CXX test/cpp_headers/opal_spec.o 00:15:32.000 LINK cuse 00:15:32.000 CXX test/cpp_headers/pci_ids.o 00:15:32.000 CXX test/cpp_headers/pipe.o 00:15:32.000 CXX test/cpp_headers/queue.o 00:15:32.000 CXX test/cpp_headers/reduce.o 00:15:32.000 CXX test/cpp_headers/rpc.o 00:15:32.000 CXX test/cpp_headers/scheduler.o 00:15:32.000 CXX test/cpp_headers/scsi.o 00:15:32.000 CXX test/cpp_headers/scsi_spec.o 00:15:32.257 CXX test/cpp_headers/sock.o 00:15:32.257 CXX test/cpp_headers/stdinc.o 00:15:32.257 CXX test/cpp_headers/string.o 00:15:32.257 CXX test/cpp_headers/thread.o 00:15:32.257 CXX test/cpp_headers/trace.o 00:15:32.257 CXX test/cpp_headers/trace_parser.o 00:15:32.257 CXX test/cpp_headers/tree.o 00:15:32.257 CXX test/cpp_headers/ublk.o 00:15:32.257 CXX test/cpp_headers/util.o 00:15:32.257 CXX test/cpp_headers/uuid.o 00:15:32.257 CXX test/cpp_headers/version.o 00:15:32.257 CXX test/cpp_headers/vfio_user_pci.o 00:15:32.257 CXX test/cpp_headers/vfio_user_spec.o 00:15:32.257 CXX test/cpp_headers/vhost.o 00:15:32.514 CXX test/cpp_headers/vmd.o 00:15:32.514 CXX test/cpp_headers/xor.o 00:15:32.514 CXX test/cpp_headers/zipf.o 00:15:34.436 LINK esnap 00:15:34.694 00:15:34.694 real 0m53.740s 00:15:34.694 user 4m56.493s 00:15:34.694 sys 1m3.525s 00:15:34.694 21:28:55 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:15:34.695 21:28:55 -- common/autotest_common.sh@10 -- $ set +x 00:15:34.695 ************************************ 00:15:34.695 END TEST make 00:15:34.695 ************************************ 00:15:34.695 21:28:55 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:34.695 21:28:55 -- nvmf/common.sh@7 -- # uname -s 00:15:34.695 21:28:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:34.695 21:28:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:34.695 21:28:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:34.695 21:28:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:34.695 21:28:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:34.695 21:28:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:34.695 21:28:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:34.695 21:28:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:34.695 21:28:55 -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:34.695 21:28:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:34.695 21:28:55 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:65f0dc09-2f81-4c7b-a413-2a2a000e2750 00:15:34.695 21:28:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=65f0dc09-2f81-4c7b-a413-2a2a000e2750 00:15:34.695 21:28:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:34.695 21:28:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:34.695 21:28:55 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:34.695 21:28:55 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:34.695 21:28:55 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:34.695 21:28:55 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:34.695 21:28:55 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:34.695 21:28:55 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.695 21:28:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.695 21:28:55 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.695 21:28:55 -- paths/export.sh@5 -- # export PATH 00:15:34.695 21:28:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.695 21:28:55 -- nvmf/common.sh@46 -- # : 0 00:15:34.695 21:28:55 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:34.695 21:28:55 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:34.695 21:28:55 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:34.695 21:28:55 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:34.695 21:28:55 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:34.695 21:28:55 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:34.695 21:28:55 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:34.695 21:28:55 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:34.695 21:28:55 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:15:34.695 21:28:55 -- spdk/autotest.sh@32 -- # uname -s 00:15:34.695 21:28:55 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:15:34.695 21:28:55 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:15:34.695 21:28:55 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:15:34.695 21:28:55 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:15:34.695 21:28:55 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:15:34.695 
21:28:55 -- spdk/autotest.sh@44 -- # modprobe nbd 00:15:34.954 21:28:55 -- spdk/autotest.sh@46 -- # type -P udevadm 00:15:34.954 21:28:55 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:15:34.954 21:28:55 -- spdk/autotest.sh@48 -- # udevadm_pid=60098 00:15:34.954 21:28:55 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:15:34.954 21:28:55 -- spdk/autotest.sh@51 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/power 00:15:34.954 21:28:55 -- spdk/autotest.sh@54 -- # echo 60107 00:15:34.954 21:28:55 -- spdk/autotest.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power 00:15:34.954 21:28:55 -- spdk/autotest.sh@56 -- # echo 60108 00:15:34.954 21:28:55 -- spdk/autotest.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power 00:15:34.954 21:28:55 -- spdk/autotest.sh@58 -- # [[ QEMU != QEMU ]] 00:15:34.954 21:28:55 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:15:34.954 21:28:55 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:15:34.954 21:28:55 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:34.954 21:28:55 -- common/autotest_common.sh@10 -- # set +x 00:15:34.954 21:28:55 -- spdk/autotest.sh@70 -- # create_test_list 00:15:34.954 21:28:55 -- common/autotest_common.sh@736 -- # xtrace_disable 00:15:34.954 21:28:55 -- common/autotest_common.sh@10 -- # set +x 00:15:34.954 21:28:55 -- spdk/autotest.sh@72 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:15:34.954 21:28:55 -- spdk/autotest.sh@72 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:15:34.954 21:28:55 -- spdk/autotest.sh@72 -- # src=/home/vagrant/spdk_repo/spdk 00:15:34.954 21:28:55 -- spdk/autotest.sh@73 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:15:34.954 21:28:55 -- spdk/autotest.sh@74 -- # cd /home/vagrant/spdk_repo/spdk 00:15:34.954 21:28:55 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:15:34.954 21:28:55 -- common/autotest_common.sh@1440 -- # uname 00:15:34.954 21:28:55 -- common/autotest_common.sh@1440 -- # '[' Linux = FreeBSD ']' 00:15:34.954 21:28:55 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:15:34.954 21:28:55 -- common/autotest_common.sh@1460 -- # uname 00:15:34.954 21:28:55 -- common/autotest_common.sh@1460 -- # [[ Linux = FreeBSD ]] 00:15:34.954 21:28:55 -- spdk/autotest.sh@82 -- # grep CC_TYPE mk/cc.mk 00:15:34.954 21:28:55 -- spdk/autotest.sh@82 -- # CC_TYPE=CC_TYPE=gcc 00:15:34.954 21:28:55 -- spdk/autotest.sh@83 -- # hash lcov 00:15:34.954 21:28:55 -- spdk/autotest.sh@83 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:15:34.954 21:28:55 -- spdk/autotest.sh@91 -- # export 'LCOV_OPTS= 00:15:34.954 --rc lcov_branch_coverage=1 00:15:34.954 --rc lcov_function_coverage=1 00:15:34.954 --rc genhtml_branch_coverage=1 00:15:34.954 --rc genhtml_function_coverage=1 00:15:34.954 --rc genhtml_legend=1 00:15:34.954 --rc geninfo_all_blocks=1 00:15:34.954 ' 00:15:34.954 21:28:55 -- spdk/autotest.sh@91 -- # LCOV_OPTS=' 00:15:34.954 --rc lcov_branch_coverage=1 00:15:34.954 --rc lcov_function_coverage=1 00:15:34.954 --rc genhtml_branch_coverage=1 00:15:34.954 --rc genhtml_function_coverage=1 00:15:34.954 --rc genhtml_legend=1 00:15:34.954 --rc geninfo_all_blocks=1 00:15:34.954 ' 00:15:34.954 21:28:55 -- spdk/autotest.sh@92 -- # export 'LCOV=lcov 00:15:34.954 --rc lcov_branch_coverage=1 00:15:34.954 --rc lcov_function_coverage=1 00:15:34.954 --rc genhtml_branch_coverage=1 00:15:34.954 --rc 
genhtml_function_coverage=1 00:15:34.954 --rc genhtml_legend=1 00:15:34.954 --rc geninfo_all_blocks=1 00:15:34.954 --no-external' 00:15:34.954 21:28:55 -- spdk/autotest.sh@92 -- # LCOV='lcov 00:15:34.954 --rc lcov_branch_coverage=1 00:15:34.954 --rc lcov_function_coverage=1 00:15:34.954 --rc genhtml_branch_coverage=1 00:15:34.954 --rc genhtml_function_coverage=1 00:15:34.954 --rc genhtml_legend=1 00:15:34.954 --rc geninfo_all_blocks=1 00:15:34.954 --no-external' 00:15:34.955 21:28:55 -- spdk/autotest.sh@94 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:15:34.955 lcov: LCOV version 1.14 00:15:34.955 21:28:55 -- spdk/autotest.sh@96 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:15:44.920 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:15:44.920 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:15:44.920 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:15:44.920 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:15:44.920 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:15:44.920 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:16:06.863 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:16:06.863 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:16:06.863 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:16:06.863 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:16:06.863 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:16:06.863 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:16:06.863 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:16:06.863 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:16:06.863 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:16:06.863 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:16:06.863 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:16:06.863 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:16:06.863 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:16:06.863 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:16:06.863 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:16:06.863 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:16:06.863 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:16:06.863 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:16:06.863 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:16:06.863 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:16:06.863 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:16:06.863 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:16:06.863 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:16:06.863 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:16:06.863 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:16:06.863 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:16:06.863 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:16:06.863 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:16:06.863 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:16:06.863 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:16:06.863 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:16:06.863 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:16:06.863 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:16:06.863 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:16:06.863 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:16:06.863 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:16:06.863 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:16:06.863 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:16:06.863 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:16:06.863 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:16:06.863 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:16:06.863 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:16:06.863 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:16:06.863 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:16:06.863 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:16:06.863 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:16:06.863 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:16:06.863 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:16:06.863 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:16:06.863 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:16:06.863 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:16:06.863 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:16:06.863 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:16:06.863 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:16:06.863 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:16:06.863 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:16:06.864 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:16:06.864 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:16:06.864 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:16:06.864 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:16:06.864 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:16:06.864 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:16:06.864 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:16:06.864 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:16:06.864 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:16:06.864 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:16:06.864 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:16:06.864 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:16:06.864 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:16:06.864 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:16:06.864 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:16:06.864 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:16:06.864 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:16:06.864 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:16:06.864 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:16:06.864 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:16:06.864 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:16:06.864 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:16:06.864 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:16:06.864 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:16:06.864 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:16:06.864 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:16:06.864 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no 
functions found 00:16:06.864 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:16:06.864 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:16:06.864 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:16:06.864 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:16:06.864 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:16:06.864 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:16:06.864 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:16:06.864 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:16:06.864 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:16:06.864 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:16:06.864 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:16:06.864 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:16:06.864 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:16:06.864 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:16:06.864 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:16:06.864 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:16:06.864 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:16:06.864 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:16:06.864 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:16:06.864 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:16:06.864 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:16:06.864 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:16:06.864 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:16:06.864 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:16:06.864 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:16:06.864 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:16:06.864 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:16:06.864 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:16:06.864 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:16:06.864 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:16:06.864 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:16:06.864 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:16:06.864 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:16:06.864 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:16:06.864 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:16:06.864 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:16:06.864 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:16:06.864 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:16:06.864 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:16:06.864 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:16:06.864 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:16:06.864 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:16:06.864 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:16:06.864 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:16:06.864 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:16:06.864 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:16:06.864 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:16:06.864 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:16:06.864 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:16:06.864 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:16:06.864 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:16:06.864 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:16:06.864 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:16:06.864 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:16:06.864 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:16:06.864 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:16:06.864 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:16:06.864 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:16:06.864 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:16:06.864 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:16:06.864 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:16:06.864 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:16:06.864 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:16:06.864 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:16:06.864 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:16:06.864 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:16:06.864 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:16:06.864 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:16:06.864 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:16:06.864 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:16:06.864 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:16:06.864 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:16:06.864 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:16:06.864 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:16:06.864 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:16:06.864 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:16:06.865 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:16:06.865 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:16:06.865 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:16:06.865 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:16:06.865 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:16:06.865 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:16:06.865 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:16:06.865 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:16:06.865 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:16:06.865 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:16:06.865 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:16:06.865 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:16:06.865 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:16:07.800 21:29:28 -- spdk/autotest.sh@100 -- # timing_enter pre_cleanup 00:16:07.800 21:29:28 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:07.800 21:29:28 -- common/autotest_common.sh@10 -- # set +x 00:16:07.800 21:29:28 -- spdk/autotest.sh@102 -- # rm -f 00:16:07.800 21:29:28 -- spdk/autotest.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:16:08.378 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:08.636 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:16:08.636 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:16:08.636 21:29:29 -- spdk/autotest.sh@107 -- # get_zoned_devs 00:16:08.636 21:29:29 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:16:08.636 21:29:29 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:16:08.636 21:29:29 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:16:08.636 21:29:29 -- common/autotest_common.sh@1657 -- # 
for nvme in /sys/block/nvme* 00:16:08.636 21:29:29 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:16:08.636 21:29:29 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:16:08.636 21:29:29 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:16:08.636 21:29:29 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:16:08.636 21:29:29 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:16:08.636 21:29:29 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n1 00:16:08.636 21:29:29 -- common/autotest_common.sh@1647 -- # local device=nvme1n1 00:16:08.636 21:29:29 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:16:08.636 21:29:29 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:16:08.636 21:29:29 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:16:08.636 21:29:29 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n2 00:16:08.636 21:29:29 -- common/autotest_common.sh@1647 -- # local device=nvme1n2 00:16:08.636 21:29:29 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:16:08.636 21:29:29 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:16:08.636 21:29:29 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:16:08.636 21:29:29 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n3 00:16:08.636 21:29:29 -- common/autotest_common.sh@1647 -- # local device=nvme1n3 00:16:08.636 21:29:29 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:16:08.636 21:29:29 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:16:08.636 21:29:29 -- spdk/autotest.sh@109 -- # (( 0 > 0 )) 00:16:08.636 21:29:29 -- spdk/autotest.sh@121 -- # grep -v p 00:16:08.636 21:29:29 -- spdk/autotest.sh@121 -- # ls /dev/nvme0n1 /dev/nvme1n1 /dev/nvme1n2 /dev/nvme1n3 00:16:08.636 21:29:29 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:16:08.636 21:29:29 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:16:08.636 21:29:29 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme0n1 00:16:08.636 21:29:29 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:16:08.636 21:29:29 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:16:08.636 No valid GPT data, bailing 00:16:08.636 21:29:29 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:16:08.636 21:29:29 -- scripts/common.sh@393 -- # pt= 00:16:08.636 21:29:29 -- scripts/common.sh@394 -- # return 1 00:16:08.636 21:29:29 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:16:08.636 1+0 records in 00:16:08.636 1+0 records out 00:16:08.636 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00478154 s, 219 MB/s 00:16:08.636 21:29:29 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:16:08.636 21:29:29 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:16:08.636 21:29:29 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme1n1 00:16:08.636 21:29:29 -- scripts/common.sh@380 -- # local block=/dev/nvme1n1 pt 00:16:08.636 21:29:29 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:16:08.636 No valid GPT data, bailing 00:16:08.636 21:29:29 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:16:08.636 21:29:29 -- scripts/common.sh@393 -- # pt= 00:16:08.636 21:29:29 -- scripts/common.sh@394 -- # return 1 00:16:08.636 21:29:29 -- 
spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:16:08.636 1+0 records in 00:16:08.636 1+0 records out 00:16:08.636 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00412457 s, 254 MB/s 00:16:08.636 21:29:29 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:16:08.636 21:29:29 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:16:08.636 21:29:29 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme1n2 00:16:08.636 21:29:29 -- scripts/common.sh@380 -- # local block=/dev/nvme1n2 pt 00:16:08.636 21:29:29 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:16:08.894 No valid GPT data, bailing 00:16:08.894 21:29:29 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:16:08.894 21:29:29 -- scripts/common.sh@393 -- # pt= 00:16:08.894 21:29:29 -- scripts/common.sh@394 -- # return 1 00:16:08.894 21:29:29 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:16:08.894 1+0 records in 00:16:08.894 1+0 records out 00:16:08.894 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0040319 s, 260 MB/s 00:16:08.894 21:29:29 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:16:08.894 21:29:29 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:16:08.894 21:29:29 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme1n3 00:16:08.894 21:29:29 -- scripts/common.sh@380 -- # local block=/dev/nvme1n3 pt 00:16:08.894 21:29:29 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:16:08.894 No valid GPT data, bailing 00:16:08.894 21:29:29 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:16:08.894 21:29:29 -- scripts/common.sh@393 -- # pt= 00:16:08.894 21:29:29 -- scripts/common.sh@394 -- # return 1 00:16:08.894 21:29:29 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:16:08.894 1+0 records in 00:16:08.894 1+0 records out 00:16:08.894 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00450452 s, 233 MB/s 00:16:08.894 21:29:29 -- spdk/autotest.sh@129 -- # sync 00:16:08.894 21:29:29 -- spdk/autotest.sh@131 -- # xtrace_disable_per_cmd reap_spdk_processes 00:16:08.894 21:29:29 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:16:08.894 21:29:29 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:16:10.791 21:29:31 -- spdk/autotest.sh@135 -- # uname -s 00:16:10.791 21:29:31 -- spdk/autotest.sh@135 -- # '[' Linux = Linux ']' 00:16:10.791 21:29:31 -- spdk/autotest.sh@136 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:16:10.791 21:29:31 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:16:10.791 21:29:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:10.791 21:29:31 -- common/autotest_common.sh@10 -- # set +x 00:16:10.791 ************************************ 00:16:10.791 START TEST setup.sh 00:16:10.791 ************************************ 00:16:10.791 21:29:31 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:16:10.791 * Looking for test storage... 
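The device-wipe pass just above (before the setup.sh suite starts) repeats one fixed pattern per namespace: list /dev/nvme*n* while skipping partitions, ask spdk-gpt.py and blkid whether the disk already carries a partition table, and only when both come back empty stamp the first megabyte with zeros. The following is a rough standalone sketch of that logic; the spdk-gpt.py path, the blkid flags and the dd invocation are taken from the trace, while the wrapper function name and the exit-code assumption for spdk-gpt.py are mine, not the exact autotest code.

# Sketch only (simplified from the trace above; wipe_unused_nvme_namespaces is an assumed name)
wipe_unused_nvme_namespaces() {
    local dev pt
    for dev in $(ls /dev/nvme*n* 2>/dev/null | grep -v p || true); do
        # Assumption: spdk-gpt.py exits non-zero when it prints "No valid GPT data, bailing"
        if /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py "$dev"; then
            continue                          # GPT data present: leave the device alone
        fi
        pt=$(blkid -s PTTYPE -o value "$dev")
        [[ -n $pt ]] && continue              # some other partition table: also leave it
        # Unused namespace: zero the first 1 MiB so stale metadata cannot confuse later tests
        dd if=/dev/zero of="$dev" bs=1M count=1
    done
}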
00:16:10.791 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:16:10.791 21:29:31 -- setup/test-setup.sh@10 -- # uname -s 00:16:10.791 21:29:31 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:16:10.791 21:29:31 -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:16:10.791 21:29:31 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:16:10.791 21:29:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:10.791 21:29:31 -- common/autotest_common.sh@10 -- # set +x 00:16:10.791 ************************************ 00:16:10.791 START TEST acl 00:16:10.791 ************************************ 00:16:10.791 21:29:31 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:16:10.791 * Looking for test storage... 00:16:10.791 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:16:10.791 21:29:31 -- setup/acl.sh@10 -- # get_zoned_devs 00:16:10.791 21:29:31 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:16:10.791 21:29:31 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:16:10.792 21:29:31 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:16:10.792 21:29:31 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:16:10.792 21:29:31 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:16:10.792 21:29:31 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:16:10.792 21:29:31 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:16:10.792 21:29:31 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:16:10.792 21:29:31 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:16:10.792 21:29:31 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n1 00:16:10.792 21:29:31 -- common/autotest_common.sh@1647 -- # local device=nvme1n1 00:16:10.792 21:29:31 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:16:10.792 21:29:31 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:16:10.792 21:29:31 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:16:10.792 21:29:31 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n2 00:16:10.792 21:29:31 -- common/autotest_common.sh@1647 -- # local device=nvme1n2 00:16:10.792 21:29:31 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:16:10.792 21:29:31 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:16:10.792 21:29:31 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:16:10.792 21:29:31 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n3 00:16:10.792 21:29:31 -- common/autotest_common.sh@1647 -- # local device=nvme1n3 00:16:10.792 21:29:31 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:16:10.792 21:29:31 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:16:10.792 21:29:31 -- setup/acl.sh@12 -- # devs=() 00:16:10.792 21:29:31 -- setup/acl.sh@12 -- # declare -a devs 00:16:10.792 21:29:31 -- setup/acl.sh@13 -- # drivers=() 00:16:10.792 21:29:31 -- setup/acl.sh@13 -- # declare -A drivers 00:16:10.792 21:29:31 -- setup/acl.sh@51 -- # setup reset 00:16:10.792 21:29:31 -- setup/common.sh@9 -- # [[ reset == output ]] 00:16:10.792 21:29:31 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:16:11.726 21:29:32 -- setup/acl.sh@52 -- # collect_setup_devs 00:16:11.726 21:29:32 -- setup/acl.sh@16 -- # local dev driver 00:16:11.726 
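get_zoned_devs, traced both in autotest.sh earlier and in acl.sh here, decides whether a namespace is zoned purely from sysfs: it checks /sys/block/<dev>/queue/zoned and treats any value other than "none" as zoned. A minimal restatement of that check follows; the helper name and the echoed output format are my own, only the sysfs attribute and the "none" comparison come from the trace.

# Sketch of the zoned-device check (list_zoned_nvme is an assumed name)
list_zoned_nvme() {
    local sys dev mode
    for sys in /sys/block/nvme*; do
        dev=${sys##*/}
        # The kernel reports "none", "host-aware" or "host-managed" for this attribute
        mode=$(cat "$sys/queue/zoned" 2>/dev/null || echo none)
        [[ $mode != none ]] && echo "$dev is zoned ($mode)"
    done
}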
21:29:32 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:16:11.726 21:29:32 -- setup/acl.sh@15 -- # setup output status 00:16:11.726 21:29:32 -- setup/common.sh@9 -- # [[ output == output ]] 00:16:11.726 21:29:32 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:16:11.726 Hugepages 00:16:11.726 node hugesize free / total 00:16:11.726 21:29:32 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:16:11.726 21:29:32 -- setup/acl.sh@19 -- # continue 00:16:11.726 21:29:32 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:16:11.726 00:16:11.726 Type BDF Vendor Device NUMA Driver Device Block devices 00:16:11.726 21:29:32 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:16:11.726 21:29:32 -- setup/acl.sh@19 -- # continue 00:16:11.726 21:29:32 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:16:11.726 21:29:32 -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:16:11.726 21:29:32 -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:16:11.726 21:29:32 -- setup/acl.sh@20 -- # continue 00:16:11.726 21:29:32 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:16:11.726 21:29:32 -- setup/acl.sh@19 -- # [[ 0000:00:06.0 == *:*:*.* ]] 00:16:11.726 21:29:32 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:16:11.726 21:29:32 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:16:11.726 21:29:32 -- setup/acl.sh@22 -- # devs+=("$dev") 00:16:11.726 21:29:32 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:16:11.726 21:29:32 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:16:11.726 21:29:32 -- setup/acl.sh@19 -- # [[ 0000:00:07.0 == *:*:*.* ]] 00:16:11.726 21:29:32 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:16:11.726 21:29:32 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:16:11.726 21:29:32 -- setup/acl.sh@22 -- # devs+=("$dev") 00:16:11.726 21:29:32 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:16:11.726 21:29:32 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:16:11.726 21:29:32 -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:16:11.726 21:29:32 -- setup/acl.sh@54 -- # run_test denied denied 00:16:11.726 21:29:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:16:11.726 21:29:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:11.726 21:29:32 -- common/autotest_common.sh@10 -- # set +x 00:16:11.726 ************************************ 00:16:11.727 START TEST denied 00:16:11.727 ************************************ 00:16:11.727 21:29:32 -- common/autotest_common.sh@1104 -- # denied 00:16:11.727 21:29:32 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:06.0' 00:16:11.727 21:29:32 -- setup/acl.sh@38 -- # setup output config 00:16:11.727 21:29:32 -- setup/common.sh@9 -- # [[ output == output ]] 00:16:11.727 21:29:32 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:06.0' 00:16:11.727 21:29:32 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:16:12.658 0000:00:06.0 (1b36 0010): Skipping denied controller at 0000:00:06.0 00:16:12.658 21:29:33 -- setup/acl.sh@40 -- # verify 0000:00:06.0 00:16:12.658 21:29:33 -- setup/acl.sh@28 -- # local dev driver 00:16:12.658 21:29:33 -- setup/acl.sh@30 -- # for dev in "$@" 00:16:12.658 21:29:33 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:06.0 ]] 00:16:12.658 21:29:33 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:06.0/driver 00:16:12.658 21:29:33 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:16:12.658 21:29:33 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 
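The denied test that follows blocks controller 0000:00:06.0 through PCI_BLOCKED, expects "setup output config" to print "Skipping denied controller at 0000:00:06.0", and then confirms the in-kernel nvme driver still owns the device by resolving its driver symlink with readlink -f. The same check can be reproduced directly from sysfs; in the sketch below only the readlink path mirrors the trace, and the driver_of wrapper plus the usage line are illustrative assumptions.

# Sketch: report which kernel driver currently owns a PCI function (driver_of is an assumed name)
driver_of() {
    local bdf=$1 link
    [[ -e /sys/bus/pci/devices/$bdf/driver ]] || return 1   # unbound device
    link=$(readlink -f "/sys/bus/pci/devices/$bdf/driver")
    echo "${link##*/}"                                      # e.g. nvme or uio_pci_generic
}

# A controller skipped via PCI_BLOCKED should still be bound to the kernel nvme driver:
[[ $(driver_of 0000:00:06.0) == nvme ]] && echo "0000:00:06.0 left on nvme"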
00:16:12.658 21:29:33 -- setup/acl.sh@41 -- # setup reset 00:16:12.658 21:29:33 -- setup/common.sh@9 -- # [[ reset == output ]] 00:16:12.658 21:29:33 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:16:13.224 00:16:13.224 real 0m1.410s 00:16:13.224 user 0m0.542s 00:16:13.224 sys 0m0.798s 00:16:13.224 21:29:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:13.224 21:29:34 -- common/autotest_common.sh@10 -- # set +x 00:16:13.224 ************************************ 00:16:13.224 END TEST denied 00:16:13.224 ************************************ 00:16:13.224 21:29:34 -- setup/acl.sh@55 -- # run_test allowed allowed 00:16:13.224 21:29:34 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:16:13.224 21:29:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:13.224 21:29:34 -- common/autotest_common.sh@10 -- # set +x 00:16:13.224 ************************************ 00:16:13.224 START TEST allowed 00:16:13.224 ************************************ 00:16:13.224 21:29:34 -- common/autotest_common.sh@1104 -- # allowed 00:16:13.224 21:29:34 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:06.0 00:16:13.224 21:29:34 -- setup/acl.sh@46 -- # grep -E '0000:00:06.0 .*: nvme -> .*' 00:16:13.224 21:29:34 -- setup/acl.sh@45 -- # setup output config 00:16:13.224 21:29:34 -- setup/common.sh@9 -- # [[ output == output ]] 00:16:13.224 21:29:34 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:16:14.156 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:16:14.156 21:29:34 -- setup/acl.sh@47 -- # verify 0000:00:07.0 00:16:14.156 21:29:34 -- setup/acl.sh@28 -- # local dev driver 00:16:14.156 21:29:34 -- setup/acl.sh@30 -- # for dev in "$@" 00:16:14.156 21:29:34 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:07.0 ]] 00:16:14.156 21:29:34 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:07.0/driver 00:16:14.156 21:29:34 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:16:14.156 21:29:34 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:16:14.156 21:29:34 -- setup/acl.sh@48 -- # setup reset 00:16:14.156 21:29:34 -- setup/common.sh@9 -- # [[ reset == output ]] 00:16:14.156 21:29:34 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:16:14.722 ************************************ 00:16:14.722 END TEST allowed 00:16:14.722 ************************************ 00:16:14.722 00:16:14.723 real 0m1.450s 00:16:14.723 user 0m0.631s 00:16:14.723 sys 0m0.810s 00:16:14.723 21:29:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:14.723 21:29:35 -- common/autotest_common.sh@10 -- # set +x 00:16:14.723 ************************************ 00:16:14.723 END TEST acl 00:16:14.723 ************************************ 00:16:14.723 00:16:14.723 real 0m4.048s 00:16:14.723 user 0m1.706s 00:16:14.723 sys 0m2.285s 00:16:14.723 21:29:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:14.723 21:29:35 -- common/autotest_common.sh@10 -- # set +x 00:16:14.723 21:29:35 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:16:14.723 21:29:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:16:14.723 21:29:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:14.723 21:29:35 -- common/autotest_common.sh@10 -- # set +x 00:16:14.723 ************************************ 00:16:14.723 START TEST hugepages 00:16:14.723 ************************************ 00:16:14.723 21:29:35 -- 
common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:16:14.982 * Looking for test storage... 00:16:14.982 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:16:14.982 21:29:35 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:16:14.982 21:29:35 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:16:14.982 21:29:35 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:16:14.982 21:29:35 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:16:14.982 21:29:35 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:16:14.982 21:29:35 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:16:14.982 21:29:35 -- setup/common.sh@17 -- # local get=Hugepagesize 00:16:14.982 21:29:35 -- setup/common.sh@18 -- # local node= 00:16:14.982 21:29:35 -- setup/common.sh@19 -- # local var val 00:16:14.982 21:29:35 -- setup/common.sh@20 -- # local mem_f mem 00:16:14.982 21:29:35 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:16:14.982 21:29:35 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:16:14.982 21:29:35 -- setup/common.sh@25 -- # [[ -n '' ]] 00:16:14.982 21:29:35 -- setup/common.sh@28 -- # mapfile -t mem 00:16:14.982 21:29:35 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:16:14.982 21:29:35 -- setup/common.sh@31 -- # IFS=': ' 00:16:14.982 21:29:35 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 4616932 kB' 'MemAvailable: 7387552 kB' 'Buffers: 2436 kB' 'Cached: 2974812 kB' 'SwapCached: 0 kB' 'Active: 434616 kB' 'Inactive: 2645856 kB' 'Active(anon): 113716 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2645856 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 104932 kB' 'Mapped: 48884 kB' 'Shmem: 10492 kB' 'KReclaimable: 81604 kB' 'Slab: 159944 kB' 'SReclaimable: 81604 kB' 'SUnreclaim: 78340 kB' 'KernelStack: 6700 kB' 'PageTables: 4604 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412440 kB' 'Committed_AS: 335724 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54916 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB' 00:16:14.982 21:29:35 -- setup/common.sh@31 -- # read -r var val _ 00:16:14.982 21:29:35 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:16:14.982 21:29:35 -- setup/common.sh@32 -- # continue 00:16:14.982 21:29:35 -- setup/common.sh@31 -- # IFS=': ' 00:16:14.982 21:29:35 -- setup/common.sh@31 -- # read -r var val _ 00:16:14.982 21:29:35 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:16:14.982 21:29:35 -- setup/common.sh@32 -- # continue 00:16:14.982 21:29:35 -- setup/common.sh@31 -- # IFS=': ' 00:16:14.982 21:29:35 -- setup/common.sh@31 -- # read -r var val _ 00:16:14.982 21:29:35 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:16:14.982 21:29:35 -- setup/common.sh@32 -- # continue 00:16:14.982 21:29:35 -- setup/common.sh@31 -- # IFS=': ' 00:16:14.982 21:29:35 -- setup/common.sh@31 -- # read -r var 
val _ 00:16:14.982 21:29:35 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:16:14.982 21:29:35 -- setup/common.sh@32 -- # continue 00:16:14.982 21:29:35 -- setup/common.sh@31 -- # IFS=': ' 00:16:14.982 21:29:35 -- setup/common.sh@31 -- # read -r var val _ 00:16:14.982 21:29:35 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:16:14.982 21:29:35 -- setup/common.sh@32 -- # continue 00:16:14.982 21:29:35 -- setup/common.sh@31 -- # IFS=': ' 00:16:14.982 21:29:35 -- setup/common.sh@31 -- # read -r var val _ 00:16:14.982 21:29:35 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:16:14.982 21:29:35 -- setup/common.sh@32 -- # continue 00:16:14.982 21:29:35 -- setup/common.sh@31 -- # IFS=': ' 00:16:14.982 21:29:35 -- setup/common.sh@31 -- # read -r var val _ 00:16:14.982 21:29:35 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:16:14.982 21:29:35 -- setup/common.sh@32 -- # continue 00:16:14.982 21:29:35 -- setup/common.sh@31 -- # IFS=': ' 00:16:14.982 21:29:35 -- setup/common.sh@31 -- # read -r var val _ 00:16:14.982 21:29:35 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:16:14.982 21:29:35 -- setup/common.sh@32 -- # continue 00:16:14.982 21:29:35 -- setup/common.sh@31 -- # IFS=': ' 00:16:14.982 21:29:35 -- setup/common.sh@31 -- # read -r var val _ 00:16:14.982 21:29:35 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:16:14.983 21:29:35 -- setup/common.sh@32 -- # continue 00:16:14.983 21:29:35 -- setup/common.sh@31 -- # IFS=': ' 00:16:14.983 21:29:35 -- setup/common.sh@31 -- # read -r var val _ 00:16:14.983 21:29:35 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:16:14.983 21:29:35 -- setup/common.sh@32 -- # continue 00:16:14.983 21:29:35 -- setup/common.sh@31 -- # IFS=': ' 00:16:14.983 21:29:35 -- setup/common.sh@31 -- # read -r var val _ 00:16:14.983 21:29:35 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:16:14.983 21:29:35 -- setup/common.sh@32 -- # continue 00:16:14.983 21:29:35 -- setup/common.sh@31 -- # IFS=': ' 00:16:14.983 21:29:35 -- setup/common.sh@31 -- # read -r var val _ 00:16:14.983 21:29:35 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:16:14.983 21:29:35 -- setup/common.sh@32 -- # continue 00:16:14.983 21:29:35 -- setup/common.sh@31 -- # IFS=': ' 00:16:14.983 21:29:35 -- setup/common.sh@31 -- # read -r var val _ 00:16:14.983 21:29:35 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:16:14.983 21:29:35 -- setup/common.sh@32 -- # continue 00:16:14.983 21:29:35 -- setup/common.sh@31 -- # IFS=': ' 00:16:14.983 21:29:35 -- setup/common.sh@31 -- # read -r var val _ 00:16:14.983 21:29:35 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:16:14.983 21:29:35 -- setup/common.sh@32 -- # continue 00:16:14.983 21:29:35 -- setup/common.sh@31 -- # IFS=': ' 00:16:14.983 21:29:35 -- setup/common.sh@31 -- # read -r var val _ 00:16:14.983 21:29:35 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:16:14.983 21:29:35 -- setup/common.sh@32 -- # continue 00:16:14.983 21:29:35 -- setup/common.sh@31 -- # IFS=': ' 00:16:14.983 21:29:35 -- setup/common.sh@31 -- # read -r var val _ 00:16:14.983 21:29:35 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:16:14.983 21:29:35 -- setup/common.sh@32 -- # continue 00:16:14.983 21:29:35 -- setup/common.sh@31 -- # IFS=': ' 00:16:14.983 
21:29:35 -- setup/common.sh@31 -- # read -r var val _ 00:16:14.983 21:29:35 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:16:14.983 21:29:35 -- setup/common.sh@32 -- # continue 00:16:14.983 21:29:35 -- setup/common.sh@31 -- # IFS=': ' 00:16:14.983 21:29:35 -- setup/common.sh@31 -- # read -r var val _ 00:16:14.983 21:29:35 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:16:14.983 21:29:35 -- setup/common.sh@32 -- # continue 00:16:14.983 21:29:35 -- setup/common.sh@31 -- # IFS=': ' 00:16:14.983 21:29:35 -- setup/common.sh@31 -- # read -r var val _ 00:16:14.983 21:29:35 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:16:14.983 21:29:35 -- setup/common.sh@32 -- # continue 00:16:14.983 21:29:35 -- setup/common.sh@31 -- # IFS=': ' 00:16:14.983 21:29:35 -- setup/common.sh@31 -- # read -r var val _ 00:16:14.983 21:29:35 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:16:14.983 21:29:35 -- setup/common.sh@32 -- # continue 00:16:14.983 21:29:35 -- setup/common.sh@31 -- # IFS=': ' 00:16:14.983 21:29:35 -- setup/common.sh@31 -- # read -r var val _ 00:16:14.983 21:29:35 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:16:14.983 21:29:35 -- setup/common.sh@32 -- # continue 00:16:14.983 21:29:35 -- setup/common.sh@31 -- # IFS=': ' 00:16:14.983 21:29:35 -- setup/common.sh@31 -- # read -r var val _ 00:16:14.983 21:29:35 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:16:14.983 21:29:35 -- setup/common.sh@32 -- # continue 00:16:14.983 21:29:35 -- setup/common.sh@31 -- # IFS=': ' 00:16:14.983 21:29:35 -- setup/common.sh@31 -- # read -r var val _ 00:16:14.983 21:29:35 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:16:14.983 21:29:35 -- setup/common.sh@32 -- # continue 00:16:14.983 21:29:35 -- setup/common.sh@31 -- # IFS=': ' 00:16:14.983 21:29:35 -- setup/common.sh@31 -- # read -r var val _ 00:16:14.983 21:29:35 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:16:14.983 21:29:35 -- setup/common.sh@32 -- # continue 00:16:14.983 21:29:35 -- setup/common.sh@31 -- # IFS=': ' 00:16:14.983 21:29:35 -- setup/common.sh@31 -- # read -r var val _ 00:16:14.983 21:29:35 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:16:14.983 21:29:35 -- setup/common.sh@32 -- # continue 00:16:14.983 21:29:35 -- setup/common.sh@31 -- # IFS=': ' 00:16:14.983 21:29:35 -- setup/common.sh@31 -- # read -r var val _ 00:16:14.983 21:29:35 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:16:14.983 21:29:35 -- setup/common.sh@32 -- # continue 00:16:14.983 21:29:35 -- setup/common.sh@31 -- # IFS=': ' 00:16:14.983 21:29:35 -- setup/common.sh@31 -- # read -r var val _ 00:16:14.983 21:29:35 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:16:14.983 21:29:35 -- setup/common.sh@32 -- # continue 00:16:14.983 21:29:35 -- setup/common.sh@31 -- # IFS=': ' 00:16:14.983 21:29:35 -- setup/common.sh@31 -- # read -r var val _ 00:16:14.983 21:29:35 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:16:14.983 21:29:35 -- setup/common.sh@32 -- # continue 00:16:14.983 21:29:35 -- setup/common.sh@31 -- # IFS=': ' 00:16:14.983 21:29:35 -- setup/common.sh@31 -- # read -r var val _ 00:16:14.983 21:29:35 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:16:14.983 21:29:35 -- setup/common.sh@32 -- # continue 00:16:14.983 21:29:35 -- setup/common.sh@31 -- # 
IFS=': ' 00:16:14.983 21:29:35 -- setup/common.sh@31 -- # read -r var val _ 00:16:14.983 21:29:35 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:16:14.983 21:29:35 -- setup/common.sh@32 -- # continue 00:16:14.983 21:29:35 -- setup/common.sh@31 -- # IFS=': ' 00:16:14.983 21:29:35 -- setup/common.sh@31 -- # read -r var val _ 00:16:14.983 21:29:35 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:16:14.983 21:29:35 -- setup/common.sh@32 -- # continue 00:16:14.983 21:29:35 -- setup/common.sh@31 -- # IFS=': ' 00:16:14.983 21:29:35 -- setup/common.sh@31 -- # read -r var val _ 00:16:14.983 21:29:35 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:16:14.983 21:29:35 -- setup/common.sh@32 -- # continue 00:16:14.983 21:29:35 -- setup/common.sh@31 -- # IFS=': ' 00:16:14.983 21:29:35 -- setup/common.sh@31 -- # read -r var val _ 00:16:14.983 21:29:35 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:16:14.983 21:29:35 -- setup/common.sh@32 -- # continue 00:16:14.983 21:29:35 -- setup/common.sh@31 -- # IFS=': ' 00:16:14.983 21:29:35 -- setup/common.sh@31 -- # read -r var val _ 00:16:14.983 21:29:35 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:16:14.983 21:29:35 -- setup/common.sh@32 -- # continue 00:16:14.983 21:29:35 -- setup/common.sh@31 -- # IFS=': ' 00:16:14.983 21:29:35 -- setup/common.sh@31 -- # read -r var val _ 00:16:14.983 21:29:35 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:16:14.983 21:29:35 -- setup/common.sh@32 -- # continue 00:16:14.983 21:29:35 -- setup/common.sh@31 -- # IFS=': ' 00:16:14.983 21:29:35 -- setup/common.sh@31 -- # read -r var val _ 00:16:14.983 21:29:35 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:16:14.983 21:29:35 -- setup/common.sh@32 -- # continue 00:16:14.983 21:29:35 -- setup/common.sh@31 -- # IFS=': ' 00:16:14.983 21:29:35 -- setup/common.sh@31 -- # read -r var val _ 00:16:14.983 21:29:35 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:16:14.983 21:29:35 -- setup/common.sh@32 -- # continue 00:16:14.983 21:29:35 -- setup/common.sh@31 -- # IFS=': ' 00:16:14.983 21:29:35 -- setup/common.sh@31 -- # read -r var val _ 00:16:14.983 21:29:35 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:16:14.983 21:29:35 -- setup/common.sh@32 -- # continue 00:16:14.983 21:29:35 -- setup/common.sh@31 -- # IFS=': ' 00:16:14.983 21:29:35 -- setup/common.sh@31 -- # read -r var val _ 00:16:14.983 21:29:35 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:16:14.983 21:29:35 -- setup/common.sh@32 -- # continue 00:16:14.983 21:29:35 -- setup/common.sh@31 -- # IFS=': ' 00:16:14.983 21:29:35 -- setup/common.sh@31 -- # read -r var val _ 00:16:14.983 21:29:35 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:16:14.983 21:29:35 -- setup/common.sh@32 -- # continue 00:16:14.983 21:29:35 -- setup/common.sh@31 -- # IFS=': ' 00:16:14.983 21:29:35 -- setup/common.sh@31 -- # read -r var val _ 00:16:14.983 21:29:35 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:16:14.983 21:29:35 -- setup/common.sh@32 -- # continue 00:16:14.983 21:29:35 -- setup/common.sh@31 -- # IFS=': ' 00:16:14.983 21:29:35 -- setup/common.sh@31 -- # read -r var val _ 00:16:14.983 21:29:35 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:16:14.983 21:29:35 -- 
setup/common.sh@32 -- # continue 00:16:14.983 21:29:35 -- setup/common.sh@31 -- # IFS=': ' 00:16:14.983 21:29:35 -- setup/common.sh@31 -- # read -r var val _ 00:16:14.983 21:29:35 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:16:14.983 21:29:35 -- setup/common.sh@32 -- # continue 00:16:14.983 21:29:35 -- setup/common.sh@31 -- # IFS=': ' 00:16:14.983 21:29:35 -- setup/common.sh@31 -- # read -r var val _ 00:16:14.983 21:29:35 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:16:14.983 21:29:35 -- setup/common.sh@32 -- # continue 00:16:14.983 21:29:35 -- setup/common.sh@31 -- # IFS=': ' 00:16:14.983 21:29:35 -- setup/common.sh@31 -- # read -r var val _ 00:16:14.983 21:29:35 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:16:14.983 21:29:35 -- setup/common.sh@32 -- # continue 00:16:14.983 21:29:35 -- setup/common.sh@31 -- # IFS=': ' 00:16:14.983 21:29:35 -- setup/common.sh@31 -- # read -r var val _ 00:16:14.983 21:29:35 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:16:14.983 21:29:35 -- setup/common.sh@32 -- # continue 00:16:14.983 21:29:35 -- setup/common.sh@31 -- # IFS=': ' 00:16:14.983 21:29:35 -- setup/common.sh@31 -- # read -r var val _ 00:16:14.983 21:29:35 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:16:14.983 21:29:35 -- setup/common.sh@32 -- # continue 00:16:14.983 21:29:35 -- setup/common.sh@31 -- # IFS=': ' 00:16:14.983 21:29:35 -- setup/common.sh@31 -- # read -r var val _ 00:16:14.983 21:29:35 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:16:14.983 21:29:35 -- setup/common.sh@32 -- # continue 00:16:14.983 21:29:35 -- setup/common.sh@31 -- # IFS=': ' 00:16:14.983 21:29:35 -- setup/common.sh@31 -- # read -r var val _ 00:16:14.983 21:29:35 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:16:14.983 21:29:35 -- setup/common.sh@32 -- # continue 00:16:14.983 21:29:35 -- setup/common.sh@31 -- # IFS=': ' 00:16:14.983 21:29:35 -- setup/common.sh@31 -- # read -r var val _ 00:16:14.983 21:29:35 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:16:14.983 21:29:35 -- setup/common.sh@32 -- # continue 00:16:14.983 21:29:35 -- setup/common.sh@31 -- # IFS=': ' 00:16:14.983 21:29:35 -- setup/common.sh@31 -- # read -r var val _ 00:16:14.983 21:29:35 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:16:14.983 21:29:35 -- setup/common.sh@32 -- # continue 00:16:14.983 21:29:35 -- setup/common.sh@31 -- # IFS=': ' 00:16:14.983 21:29:35 -- setup/common.sh@31 -- # read -r var val _ 00:16:14.983 21:29:35 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:16:14.983 21:29:35 -- setup/common.sh@32 -- # continue 00:16:14.983 21:29:35 -- setup/common.sh@31 -- # IFS=': ' 00:16:14.984 21:29:35 -- setup/common.sh@31 -- # read -r var val _ 00:16:14.984 21:29:35 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:16:14.984 21:29:35 -- setup/common.sh@33 -- # echo 2048 00:16:14.984 21:29:35 -- setup/common.sh@33 -- # return 0 00:16:14.984 21:29:35 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:16:14.984 21:29:35 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:16:14.984 21:29:35 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:16:14.984 21:29:35 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:16:14.984 21:29:35 
-- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:16:14.984 21:29:35 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:16:14.984 21:29:35 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:16:14.984 21:29:35 -- setup/hugepages.sh@207 -- # get_nodes 00:16:14.984 21:29:35 -- setup/hugepages.sh@27 -- # local node 00:16:14.984 21:29:35 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:16:14.984 21:29:35 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:16:14.984 21:29:35 -- setup/hugepages.sh@32 -- # no_nodes=1 00:16:14.984 21:29:35 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:16:14.984 21:29:35 -- setup/hugepages.sh@208 -- # clear_hp 00:16:14.984 21:29:35 -- setup/hugepages.sh@37 -- # local node hp 00:16:14.984 21:29:35 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:16:14.984 21:29:35 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:16:14.984 21:29:35 -- setup/hugepages.sh@41 -- # echo 0 00:16:14.984 21:29:35 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:16:14.984 21:29:35 -- setup/hugepages.sh@41 -- # echo 0 00:16:14.984 21:29:35 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:16:14.984 21:29:35 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:16:14.984 21:29:35 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:16:14.984 21:29:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:16:14.984 21:29:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:14.984 21:29:35 -- common/autotest_common.sh@10 -- # set +x 00:16:14.984 ************************************ 00:16:14.984 START TEST default_setup 00:16:14.984 ************************************ 00:16:14.984 21:29:35 -- common/autotest_common.sh@1104 -- # default_setup 00:16:14.984 21:29:35 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:16:14.984 21:29:35 -- setup/hugepages.sh@49 -- # local size=2097152 00:16:14.984 21:29:35 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:16:14.984 21:29:35 -- setup/hugepages.sh@51 -- # shift 00:16:14.984 21:29:35 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:16:14.984 21:29:35 -- setup/hugepages.sh@52 -- # local node_ids 00:16:14.984 21:29:35 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:16:14.984 21:29:35 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:16:14.984 21:29:35 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:16:14.984 21:29:35 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:16:14.984 21:29:35 -- setup/hugepages.sh@62 -- # local user_nodes 00:16:14.984 21:29:35 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:16:14.984 21:29:35 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:16:14.984 21:29:35 -- setup/hugepages.sh@67 -- # nodes_test=() 00:16:14.984 21:29:35 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:16:14.984 21:29:35 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:16:14.984 21:29:35 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:16:14.984 21:29:35 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:16:14.984 21:29:35 -- setup/hugepages.sh@73 -- # return 0 00:16:14.984 21:29:35 -- setup/hugepages.sh@137 -- # setup output 00:16:14.984 21:29:35 -- setup/common.sh@9 -- # [[ output == output ]] 00:16:14.984 21:29:35 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:15.551 0000:00:03.0 (1af4 1001): Active devices: 
mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:15.811 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:16:15.811 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:16:15.811 21:29:36 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:16:15.811 21:29:36 -- setup/hugepages.sh@89 -- # local node 00:16:15.811 21:29:36 -- setup/hugepages.sh@90 -- # local sorted_t 00:16:15.811 21:29:36 -- setup/hugepages.sh@91 -- # local sorted_s 00:16:15.811 21:29:36 -- setup/hugepages.sh@92 -- # local surp 00:16:15.811 21:29:36 -- setup/hugepages.sh@93 -- # local resv 00:16:15.811 21:29:36 -- setup/hugepages.sh@94 -- # local anon 00:16:15.811 21:29:36 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:16:15.811 21:29:36 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:16:15.811 21:29:36 -- setup/common.sh@17 -- # local get=AnonHugePages 00:16:15.811 21:29:36 -- setup/common.sh@18 -- # local node= 00:16:15.811 21:29:36 -- setup/common.sh@19 -- # local var val 00:16:15.811 21:29:36 -- setup/common.sh@20 -- # local mem_f mem 00:16:15.811 21:29:36 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:16:15.811 21:29:36 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:16:15.811 21:29:36 -- setup/common.sh@25 -- # [[ -n '' ]] 00:16:15.811 21:29:36 -- setup/common.sh@28 -- # mapfile -t mem 00:16:15.811 21:29:36 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:16:15.811 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:15.812 21:29:36 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6703300 kB' 'MemAvailable: 9473796 kB' 'Buffers: 2436 kB' 'Cached: 2974804 kB' 'SwapCached: 0 kB' 'Active: 450648 kB' 'Inactive: 2645868 kB' 'Active(anon): 129748 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2645868 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 120960 kB' 'Mapped: 49048 kB' 'Shmem: 10468 kB' 'KReclaimable: 81332 kB' 'Slab: 159596 kB' 'SReclaimable: 81332 kB' 'SUnreclaim: 78264 kB' 'KernelStack: 6576 kB' 'PageTables: 4372 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 351748 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54948 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB' 00:16:15.812 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:15.812 21:29:36 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:15.812 21:29:36 -- setup/common.sh@32 -- # continue 00:16:15.812 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:15.812 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:15.812 21:29:36 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:15.812 21:29:36 -- setup/common.sh@32 -- # continue 00:16:15.812 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:15.812 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:15.812 21:29:36 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s 
]] 00:16:15.812 21:29:36 -- setup/common.sh@32 -- # continue 00:16:15.812 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:15.812 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:15.812 21:29:36 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:15.812 21:29:36 -- setup/common.sh@32 -- # continue 00:16:15.812 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:15.812 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:15.812 21:29:36 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:15.812 21:29:36 -- setup/common.sh@32 -- # continue 00:16:15.812 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:15.812 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:15.812 21:29:36 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:15.812 21:29:36 -- setup/common.sh@32 -- # continue 00:16:15.812 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:15.812 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:15.812 21:29:36 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:15.812 21:29:36 -- setup/common.sh@32 -- # continue 00:16:15.812 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:15.812 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:15.812 21:29:36 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:15.812 21:29:36 -- setup/common.sh@32 -- # continue 00:16:15.812 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:15.812 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:15.812 21:29:36 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:15.812 21:29:36 -- setup/common.sh@32 -- # continue 00:16:15.812 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:15.812 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:15.812 21:29:36 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:15.812 21:29:36 -- setup/common.sh@32 -- # continue 00:16:15.812 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:15.812 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:15.812 21:29:36 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:15.812 21:29:36 -- setup/common.sh@32 -- # continue 00:16:15.812 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:15.812 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:15.812 21:29:36 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:15.812 21:29:36 -- setup/common.sh@32 -- # continue 00:16:15.812 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:15.812 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:15.812 21:29:36 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:15.812 21:29:36 -- setup/common.sh@32 -- # continue 00:16:15.812 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:15.812 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:15.812 21:29:36 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:15.812 21:29:36 -- setup/common.sh@32 -- # continue 00:16:15.812 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:15.812 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:15.812 21:29:36 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:15.812 21:29:36 -- setup/common.sh@32 -- # continue 00:16:15.812 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:15.812 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:15.812 21:29:36 -- 
setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:15.812 21:29:36 -- setup/common.sh@32 -- # continue 00:16:15.812 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:15.812 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:15.812 21:29:36 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:15.812 21:29:36 -- setup/common.sh@32 -- # continue 00:16:15.812 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:15.812 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:15.812 21:29:36 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:15.812 21:29:36 -- setup/common.sh@32 -- # continue 00:16:15.812 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:15.812 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:15.812 21:29:36 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:15.812 21:29:36 -- setup/common.sh@32 -- # continue 00:16:15.812 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:15.812 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:15.812 21:29:36 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:15.812 21:29:36 -- setup/common.sh@32 -- # continue 00:16:15.812 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:15.812 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:15.812 21:29:36 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:15.812 21:29:36 -- setup/common.sh@32 -- # continue 00:16:15.812 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:15.812 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:15.812 21:29:36 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:15.812 21:29:36 -- setup/common.sh@32 -- # continue 00:16:15.812 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:15.812 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:15.812 21:29:36 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:15.812 21:29:36 -- setup/common.sh@32 -- # continue 00:16:15.812 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:15.812 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:15.812 21:29:36 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:15.812 21:29:36 -- setup/common.sh@32 -- # continue 00:16:15.812 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:15.812 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:15.812 21:29:36 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:15.812 21:29:36 -- setup/common.sh@32 -- # continue 00:16:15.812 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:15.812 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:15.812 21:29:36 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:15.812 21:29:36 -- setup/common.sh@32 -- # continue 00:16:15.812 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:15.812 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:15.812 21:29:36 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:15.812 21:29:36 -- setup/common.sh@32 -- # continue 00:16:15.812 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:15.812 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:15.812 21:29:36 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:15.812 21:29:36 -- setup/common.sh@32 -- # continue 00:16:15.812 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:15.812 21:29:36 -- setup/common.sh@31 
-- # read -r var val _ 00:16:15.812 21:29:36 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:15.812 21:29:36 -- setup/common.sh@32 -- # continue 00:16:15.812 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:15.812 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:15.812 21:29:36 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:15.812 21:29:36 -- setup/common.sh@32 -- # continue 00:16:15.812 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:15.812 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:15.812 21:29:36 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:15.812 21:29:36 -- setup/common.sh@32 -- # continue 00:16:15.812 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:15.812 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:15.812 21:29:36 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:15.812 21:29:36 -- setup/common.sh@32 -- # continue 00:16:15.812 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:15.812 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:15.812 21:29:36 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:15.812 21:29:36 -- setup/common.sh@32 -- # continue 00:16:15.812 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:15.812 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:15.812 21:29:36 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:15.812 21:29:36 -- setup/common.sh@32 -- # continue 00:16:15.812 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:15.812 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:15.812 21:29:36 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:15.812 21:29:36 -- setup/common.sh@32 -- # continue 00:16:15.812 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:15.812 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:15.812 21:29:36 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:15.812 21:29:36 -- setup/common.sh@32 -- # continue 00:16:15.812 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:15.812 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:15.812 21:29:36 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:15.812 21:29:36 -- setup/common.sh@32 -- # continue 00:16:15.812 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:15.812 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:15.812 21:29:36 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:15.812 21:29:36 -- setup/common.sh@32 -- # continue 00:16:15.812 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:15.812 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:15.812 21:29:36 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:15.812 21:29:36 -- setup/common.sh@32 -- # continue 00:16:15.812 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:15.812 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:15.812 21:29:36 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:15.812 21:29:36 -- setup/common.sh@32 -- # continue 00:16:15.812 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:15.812 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:15.812 21:29:36 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:15.812 21:29:36 -- setup/common.sh@33 -- # echo 0 00:16:15.812 
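The 1024-page target that this verification pass keeps comparing against was produced earlier by the get_test_nr_hugepages 2097152 0 call: the requested size divided by the default hugepage size gives the page count. A hedged sketch of that arithmetic, assuming (as the trace's numbers imply) that both figures are in the same unit, kB:
size=2097152                 # requested by default_setup (2 GiB expressed in kB)
default_hugepages=2048       # Hugepagesize read from /proc/meminfo (kB)
nr_hugepages=$(( size / default_hugepages ))
echo "$nr_hugepages"         # 1024, the per-node count seen throughout this test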
21:29:36 -- setup/common.sh@33 -- # return 0 00:16:15.812 21:29:36 -- setup/hugepages.sh@97 -- # anon=0 00:16:15.812 21:29:36 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:16:15.812 21:29:36 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:16:15.813 21:29:36 -- setup/common.sh@18 -- # local node= 00:16:15.813 21:29:36 -- setup/common.sh@19 -- # local var val 00:16:15.813 21:29:36 -- setup/common.sh@20 -- # local mem_f mem 00:16:15.813 21:29:36 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:16:15.813 21:29:36 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:16:15.813 21:29:36 -- setup/common.sh@25 -- # [[ -n '' ]] 00:16:15.813 21:29:36 -- setup/common.sh@28 -- # mapfile -t mem 00:16:15.813 21:29:36 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:16:15.813 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:15.813 21:29:36 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6703048 kB' 'MemAvailable: 9473548 kB' 'Buffers: 2436 kB' 'Cached: 2974804 kB' 'SwapCached: 0 kB' 'Active: 450452 kB' 'Inactive: 2645872 kB' 'Active(anon): 129552 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2645872 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 120684 kB' 'Mapped: 48924 kB' 'Shmem: 10468 kB' 'KReclaimable: 81332 kB' 'Slab: 159580 kB' 'SReclaimable: 81332 kB' 'SUnreclaim: 78248 kB' 'KernelStack: 6576 kB' 'PageTables: 4360 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 351748 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54916 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB' 00:16:15.813 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:15.813 21:29:36 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:15.813 21:29:36 -- setup/common.sh@32 -- # continue 00:16:15.813 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:15.813 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:15.813 21:29:36 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:15.813 21:29:36 -- setup/common.sh@32 -- # continue 00:16:15.813 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:15.813 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:15.813 21:29:36 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:15.813 21:29:36 -- setup/common.sh@32 -- # continue 00:16:15.813 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:15.813 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:15.813 21:29:36 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:15.813 21:29:36 -- setup/common.sh@32 -- # continue 00:16:15.813 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:15.813 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:15.813 21:29:36 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:15.813 21:29:36 -- setup/common.sh@32 -- # continue 00:16:15.813 21:29:36 -- 
setup/common.sh@31 -- # IFS=': ' 00:16:15.813 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:15.813 21:29:36 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:15.813 21:29:36 -- setup/common.sh@32 -- # continue 00:16:15.813 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:15.813 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:15.813 21:29:36 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:15.813 21:29:36 -- setup/common.sh@32 -- # continue 00:16:15.813 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:15.813 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:15.813 21:29:36 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:15.813 21:29:36 -- setup/common.sh@32 -- # continue 00:16:15.813 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:15.813 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:15.813 21:29:36 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:15.813 21:29:36 -- setup/common.sh@32 -- # continue 00:16:15.813 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:15.813 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:15.813 21:29:36 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:15.813 21:29:36 -- setup/common.sh@32 -- # continue 00:16:15.813 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:15.813 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:15.813 21:29:36 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:15.813 21:29:36 -- setup/common.sh@32 -- # continue 00:16:15.813 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:15.813 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:15.813 21:29:36 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:15.813 21:29:36 -- setup/common.sh@32 -- # continue 00:16:15.813 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:15.813 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:15.813 21:29:36 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:15.813 21:29:36 -- setup/common.sh@32 -- # continue 00:16:15.813 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:15.813 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:15.813 21:29:36 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:15.813 21:29:36 -- setup/common.sh@32 -- # continue 00:16:15.813 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:15.813 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:15.813 21:29:36 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:15.813 21:29:36 -- setup/common.sh@32 -- # continue 00:16:15.813 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:15.813 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:15.813 21:29:36 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:15.813 21:29:36 -- setup/common.sh@32 -- # continue 00:16:15.813 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:15.813 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:15.813 21:29:36 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:15.813 21:29:36 -- setup/common.sh@32 -- # continue 00:16:15.813 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:15.813 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:15.813 21:29:36 -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:15.813 21:29:36 -- setup/common.sh@32 -- # continue 00:16:15.813 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:15.813 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:15.813 21:29:36 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:15.813 21:29:36 -- setup/common.sh@32 -- # continue 00:16:15.813 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:15.813 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:15.813 21:29:36 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:15.813 21:29:36 -- setup/common.sh@32 -- # continue 00:16:15.813 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:15.813 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:15.813 21:29:36 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:15.813 21:29:36 -- setup/common.sh@32 -- # continue 00:16:15.813 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:15.813 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:15.813 21:29:36 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:15.813 21:29:36 -- setup/common.sh@32 -- # continue 00:16:15.813 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:15.813 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:15.813 21:29:36 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:15.813 21:29:36 -- setup/common.sh@32 -- # continue 00:16:15.813 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:15.813 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:15.813 21:29:36 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:15.813 21:29:36 -- setup/common.sh@32 -- # continue 00:16:15.813 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:15.813 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:15.813 21:29:36 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:15.813 21:29:36 -- setup/common.sh@32 -- # continue 00:16:15.813 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:15.813 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:15.813 21:29:36 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:15.813 21:29:36 -- setup/common.sh@32 -- # continue 00:16:15.813 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:15.813 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:15.813 21:29:36 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:15.813 21:29:36 -- setup/common.sh@32 -- # continue 00:16:15.813 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:15.813 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:15.813 21:29:36 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:15.813 21:29:36 -- setup/common.sh@32 -- # continue 00:16:15.813 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:15.813 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:15.813 21:29:36 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:15.813 21:29:36 -- setup/common.sh@32 -- # continue 00:16:15.813 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:15.813 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:15.813 21:29:36 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:15.813 21:29:36 -- setup/common.sh@32 -- # continue 00:16:15.813 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:15.813 21:29:36 -- setup/common.sh@31 -- # 
read -r var val _ 00:16:15.813 21:29:36 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:15.813 21:29:36 -- setup/common.sh@32 -- # continue 00:16:15.813 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:15.813 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:15.813 21:29:36 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:15.813 21:29:36 -- setup/common.sh@32 -- # continue 00:16:15.813 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:15.813 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:15.813 21:29:36 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:15.813 21:29:36 -- setup/common.sh@32 -- # continue 00:16:15.813 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:15.813 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:15.813 21:29:36 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:15.813 21:29:36 -- setup/common.sh@32 -- # continue 00:16:15.813 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:15.813 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:15.813 21:29:36 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:15.813 21:29:36 -- setup/common.sh@32 -- # continue 00:16:15.813 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:15.813 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:15.813 21:29:36 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:15.813 21:29:36 -- setup/common.sh@32 -- # continue 00:16:15.813 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:15.813 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:15.813 21:29:36 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:15.813 21:29:36 -- setup/common.sh@32 -- # continue 00:16:15.813 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:15.814 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:15.814 21:29:36 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:15.814 21:29:36 -- setup/common.sh@32 -- # continue 00:16:15.814 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:15.814 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:15.814 21:29:36 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:15.814 21:29:36 -- setup/common.sh@32 -- # continue 00:16:15.814 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:15.814 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:15.814 21:29:36 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:15.814 21:29:36 -- setup/common.sh@32 -- # continue 00:16:15.814 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:15.814 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:15.814 21:29:36 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:15.814 21:29:36 -- setup/common.sh@32 -- # continue 00:16:15.814 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:15.814 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:15.814 21:29:36 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:15.814 21:29:36 -- setup/common.sh@32 -- # continue 00:16:15.814 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:15.814 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:15.814 21:29:36 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:15.814 21:29:36 -- setup/common.sh@32 
-- # continue 00:16:15.814 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:15.814 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:15.814 21:29:36 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:15.814 21:29:36 -- setup/common.sh@32 -- # continue 00:16:15.814 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:15.814 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:15.814 21:29:36 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:15.814 21:29:36 -- setup/common.sh@32 -- # continue 00:16:15.814 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:15.814 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:15.814 21:29:36 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:15.814 21:29:36 -- setup/common.sh@32 -- # continue 00:16:15.814 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:15.814 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:15.814 21:29:36 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:15.814 21:29:36 -- setup/common.sh@32 -- # continue 00:16:15.814 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:15.814 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:15.814 21:29:36 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:15.814 21:29:36 -- setup/common.sh@32 -- # continue 00:16:15.814 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:15.814 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:15.814 21:29:36 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:15.814 21:29:36 -- setup/common.sh@32 -- # continue 00:16:15.814 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:15.814 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:15.814 21:29:36 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:15.814 21:29:36 -- setup/common.sh@32 -- # continue 00:16:15.814 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:15.814 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:15.814 21:29:36 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:15.814 21:29:36 -- setup/common.sh@32 -- # continue 00:16:15.814 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:15.814 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:15.814 21:29:36 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:15.814 21:29:36 -- setup/common.sh@33 -- # echo 0 00:16:15.814 21:29:36 -- setup/common.sh@33 -- # return 0 00:16:15.814 21:29:36 -- setup/hugepages.sh@99 -- # surp=0 00:16:15.814 21:29:36 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:16:15.814 21:29:36 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:16:15.814 21:29:36 -- setup/common.sh@18 -- # local node= 00:16:15.814 21:29:36 -- setup/common.sh@19 -- # local var val 00:16:15.814 21:29:36 -- setup/common.sh@20 -- # local mem_f mem 00:16:15.814 21:29:36 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:16:15.814 21:29:36 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:16:15.814 21:29:36 -- setup/common.sh@25 -- # [[ -n '' ]] 00:16:15.814 21:29:36 -- setup/common.sh@28 -- # mapfile -t mem 00:16:15.814 21:29:36 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:16:15.814 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:15.814 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:15.814 21:29:36 -- 
setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6703048 kB' 'MemAvailable: 9473548 kB' 'Buffers: 2436 kB' 'Cached: 2974804 kB' 'SwapCached: 0 kB' 'Active: 450304 kB' 'Inactive: 2645872 kB' 'Active(anon): 129404 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2645872 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 120528 kB' 'Mapped: 48924 kB' 'Shmem: 10468 kB' 'KReclaimable: 81332 kB' 'Slab: 159568 kB' 'SReclaimable: 81332 kB' 'SUnreclaim: 78236 kB' 'KernelStack: 6544 kB' 'PageTables: 4260 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 351748 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54900 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB' 00:16:15.814 21:29:36 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:15.814 21:29:36 -- setup/common.sh@32 -- # continue 00:16:15.814 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:15.814 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:15.814 21:29:36 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:15.814 21:29:36 -- setup/common.sh@32 -- # continue 00:16:15.814 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:15.814 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:15.814 21:29:36 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:15.814 21:29:36 -- setup/common.sh@32 -- # continue 00:16:15.814 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:15.814 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:15.814 21:29:36 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:15.814 21:29:36 -- setup/common.sh@32 -- # continue 00:16:15.814 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:15.814 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:15.814 21:29:36 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:15.814 21:29:36 -- setup/common.sh@32 -- # continue 00:16:15.814 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:15.814 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:15.814 21:29:36 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:15.814 21:29:36 -- setup/common.sh@32 -- # continue 00:16:15.814 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:15.814 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:15.814 21:29:36 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:15.814 21:29:36 -- setup/common.sh@32 -- # continue 00:16:15.814 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:15.814 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:15.814 21:29:36 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:15.814 21:29:36 -- setup/common.sh@32 -- # continue 00:16:15.814 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:15.814 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:15.814 21:29:36 -- 
setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:15.814 21:29:36 -- setup/common.sh@32 -- # continue 00:16:15.814 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:15.814 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:15.814 21:29:36 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:15.814 21:29:36 -- setup/common.sh@32 -- # continue 00:16:15.814 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:15.814 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:15.814 21:29:36 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:15.814 21:29:36 -- setup/common.sh@32 -- # continue 00:16:15.814 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:15.814 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:15.814 21:29:36 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:15.814 21:29:36 -- setup/common.sh@32 -- # continue 00:16:15.814 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:15.814 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:15.814 21:29:36 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:15.814 21:29:36 -- setup/common.sh@32 -- # continue 00:16:15.814 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:15.814 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:15.814 21:29:36 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:15.814 21:29:36 -- setup/common.sh@32 -- # continue 00:16:15.814 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:15.814 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:15.814 21:29:36 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:15.814 21:29:36 -- setup/common.sh@32 -- # continue 00:16:15.814 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:15.814 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:15.814 21:29:36 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:15.814 21:29:36 -- setup/common.sh@32 -- # continue 00:16:15.814 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:15.814 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:15.814 21:29:36 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:15.814 21:29:36 -- setup/common.sh@32 -- # continue 00:16:15.814 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:15.814 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:15.814 21:29:36 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:15.814 21:29:36 -- setup/common.sh@32 -- # continue 00:16:15.814 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:15.814 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:15.814 21:29:36 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:15.814 21:29:36 -- setup/common.sh@32 -- # continue 00:16:15.814 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:15.814 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:15.814 21:29:36 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:15.814 21:29:36 -- setup/common.sh@32 -- # continue 00:16:15.814 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:15.814 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:15.814 21:29:36 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:15.814 21:29:36 -- setup/common.sh@32 -- # continue 00:16:15.814 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 
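Each counter checked in this block (anon, surp, and now resv) triggers a full rescan of /proc/meminfo, which is what makes the trace so long. For a quick manual spot-check outside the harness the same read-backs can be done with one-liners; this is purely illustrative and not how setup/common.sh is written:
anon=$(awk '/^AnonHugePages:/  {print $2}' /proc/meminfo)   # kB, expected 0 in this run
surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)   # surplus pages
resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)   # reserved pages
echo "anon=$anon surp=$surp resv=$resv"                     # 0 0 0 here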
00:16:15.814 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:15.815 21:29:36 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:15.815 21:29:36 -- setup/common.sh@32 -- # continue 00:16:15.815 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:15.815 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:15.815 21:29:36 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:15.815 21:29:36 -- setup/common.sh@32 -- # continue 00:16:15.815 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:15.815 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:15.815 21:29:36 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:15.815 21:29:36 -- setup/common.sh@32 -- # continue 00:16:15.815 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:15.815 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:15.815 21:29:36 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:15.815 21:29:36 -- setup/common.sh@32 -- # continue 00:16:15.815 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:15.815 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:15.815 21:29:36 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:15.815 21:29:36 -- setup/common.sh@32 -- # continue 00:16:15.815 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:15.815 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:15.815 21:29:36 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:15.815 21:29:36 -- setup/common.sh@32 -- # continue 00:16:15.815 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:15.815 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:15.815 21:29:36 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:15.815 21:29:36 -- setup/common.sh@32 -- # continue 00:16:15.815 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:15.815 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:15.815 21:29:36 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:15.815 21:29:36 -- setup/common.sh@32 -- # continue 00:16:15.815 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:15.815 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:15.815 21:29:36 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:15.815 21:29:36 -- setup/common.sh@32 -- # continue 00:16:15.815 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:15.815 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:15.815 21:29:36 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:15.815 21:29:36 -- setup/common.sh@32 -- # continue 00:16:15.815 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:15.815 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:15.815 21:29:36 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:15.815 21:29:36 -- setup/common.sh@32 -- # continue 00:16:15.815 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:15.815 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:15.815 21:29:36 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:15.815 21:29:36 -- setup/common.sh@32 -- # continue 00:16:15.815 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:15.815 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:15.815 21:29:36 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:15.815 21:29:36 
-- setup/common.sh@32 -- # continue 00:16:15.815 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:15.815 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:15.815 21:29:36 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:15.815 21:29:36 -- setup/common.sh@32 -- # continue 00:16:15.815 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:15.815 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.075 21:29:36 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:16.075 21:29:36 -- setup/common.sh@32 -- # continue 00:16:16.075 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.075 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.075 21:29:36 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:16.075 21:29:36 -- setup/common.sh@32 -- # continue 00:16:16.075 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.075 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.075 21:29:36 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:16.075 21:29:36 -- setup/common.sh@32 -- # continue 00:16:16.075 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.075 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.075 21:29:36 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:16.075 21:29:36 -- setup/common.sh@32 -- # continue 00:16:16.075 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.075 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.075 21:29:36 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:16.075 21:29:36 -- setup/common.sh@32 -- # continue 00:16:16.075 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.075 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.075 21:29:36 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:16.075 21:29:36 -- setup/common.sh@32 -- # continue 00:16:16.075 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.075 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.075 21:29:36 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:16.075 21:29:36 -- setup/common.sh@32 -- # continue 00:16:16.075 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.075 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.075 21:29:36 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:16.075 21:29:36 -- setup/common.sh@32 -- # continue 00:16:16.075 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.075 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.075 21:29:36 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:16.075 21:29:36 -- setup/common.sh@32 -- # continue 00:16:16.075 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.075 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.075 21:29:36 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:16.075 21:29:36 -- setup/common.sh@32 -- # continue 00:16:16.075 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.075 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.075 21:29:36 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:16.075 21:29:36 -- setup/common.sh@32 -- # continue 00:16:16.075 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.075 21:29:36 -- setup/common.sh@31 -- # read -r var 
val _ 00:16:16.075 21:29:36 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:16.075 21:29:36 -- setup/common.sh@32 -- # continue 00:16:16.075 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.075 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.075 21:29:36 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:16.075 21:29:36 -- setup/common.sh@32 -- # continue 00:16:16.075 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.075 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.075 21:29:36 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:16.075 21:29:36 -- setup/common.sh@32 -- # continue 00:16:16.075 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.075 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.075 21:29:36 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:16.075 21:29:36 -- setup/common.sh@32 -- # continue 00:16:16.075 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.075 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.075 21:29:36 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:16.075 21:29:36 -- setup/common.sh@33 -- # echo 0 00:16:16.075 21:29:36 -- setup/common.sh@33 -- # return 0 00:16:16.075 21:29:36 -- setup/hugepages.sh@100 -- # resv=0 00:16:16.075 21:29:36 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:16:16.075 nr_hugepages=1024 00:16:16.075 resv_hugepages=0 00:16:16.075 surplus_hugepages=0 00:16:16.075 anon_hugepages=0 00:16:16.076 21:29:36 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:16:16.076 21:29:36 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:16:16.076 21:29:36 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:16:16.076 21:29:36 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:16:16.076 21:29:36 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:16:16.076 21:29:36 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:16:16.076 21:29:36 -- setup/common.sh@17 -- # local get=HugePages_Total 00:16:16.076 21:29:36 -- setup/common.sh@18 -- # local node= 00:16:16.076 21:29:36 -- setup/common.sh@19 -- # local var val 00:16:16.076 21:29:36 -- setup/common.sh@20 -- # local mem_f mem 00:16:16.076 21:29:36 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:16:16.076 21:29:36 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:16:16.076 21:29:36 -- setup/common.sh@25 -- # [[ -n '' ]] 00:16:16.076 21:29:36 -- setup/common.sh@28 -- # mapfile -t mem 00:16:16.076 21:29:36 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:16:16.076 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.076 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.076 21:29:36 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6703048 kB' 'MemAvailable: 9473548 kB' 'Buffers: 2436 kB' 'Cached: 2974804 kB' 'SwapCached: 0 kB' 'Active: 450244 kB' 'Inactive: 2645872 kB' 'Active(anon): 129344 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2645872 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 120508 kB' 'Mapped: 48924 kB' 'Shmem: 10468 kB' 'KReclaimable: 81332 kB' 'Slab: 159568 kB' 'SReclaimable: 81332 kB' 'SUnreclaim: 78236 kB' 'KernelStack: 6528 kB' 'PageTables: 4216 kB' 
'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 351748 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54900 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB' 00:16:16.076 21:29:36 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:16.076 21:29:36 -- setup/common.sh@32 -- # continue 00:16:16.076 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.076 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.076 21:29:36 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:16.076 21:29:36 -- setup/common.sh@32 -- # continue 00:16:16.076 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.076 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.076 21:29:36 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:16.076 21:29:36 -- setup/common.sh@32 -- # continue 00:16:16.076 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.076 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.076 21:29:36 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:16.076 21:29:36 -- setup/common.sh@32 -- # continue 00:16:16.076 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.076 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.076 21:29:36 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:16.076 21:29:36 -- setup/common.sh@32 -- # continue 00:16:16.076 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.076 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.076 21:29:36 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:16.076 21:29:36 -- setup/common.sh@32 -- # continue 00:16:16.076 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.076 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.076 21:29:36 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:16.076 21:29:36 -- setup/common.sh@32 -- # continue 00:16:16.076 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.076 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.076 21:29:36 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:16.076 21:29:36 -- setup/common.sh@32 -- # continue 00:16:16.076 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.076 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.076 21:29:36 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:16.076 21:29:36 -- setup/common.sh@32 -- # continue 00:16:16.076 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.076 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.076 21:29:36 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:16.076 21:29:36 -- setup/common.sh@32 -- # continue 00:16:16.076 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.076 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.076 21:29:36 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:16:16.076 21:29:36 -- setup/common.sh@32 -- # continue 00:16:16.076 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.076 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.076 21:29:36 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:16.076 21:29:36 -- setup/common.sh@32 -- # continue 00:16:16.076 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.076 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.076 21:29:36 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:16.076 21:29:36 -- setup/common.sh@32 -- # continue 00:16:16.076 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.076 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.076 21:29:36 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:16.076 21:29:36 -- setup/common.sh@32 -- # continue 00:16:16.076 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.076 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.076 21:29:36 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:16.076 21:29:36 -- setup/common.sh@32 -- # continue 00:16:16.076 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.076 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.076 21:29:36 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:16.076 21:29:36 -- setup/common.sh@32 -- # continue 00:16:16.076 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.076 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.076 21:29:36 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:16.076 21:29:36 -- setup/common.sh@32 -- # continue 00:16:16.076 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.076 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.076 21:29:36 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:16.076 21:29:36 -- setup/common.sh@32 -- # continue 00:16:16.076 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.076 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.076 21:29:36 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:16.076 21:29:36 -- setup/common.sh@32 -- # continue 00:16:16.076 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.076 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.076 21:29:36 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:16.076 21:29:36 -- setup/common.sh@32 -- # continue 00:16:16.076 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.076 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.076 21:29:36 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:16.076 21:29:36 -- setup/common.sh@32 -- # continue 00:16:16.076 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.076 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.076 21:29:36 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:16.076 21:29:36 -- setup/common.sh@32 -- # continue 00:16:16.076 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.076 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.076 21:29:36 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:16.076 21:29:36 -- setup/common.sh@32 -- # continue 00:16:16.076 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.076 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 
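Before this read-back began, the clear_hp step near the top of the trace echoed 0 into every per-node pool and exported CLEAR_HUGE=yes, so the pool being counted now starts from the pages the test itself requested. A rough standalone equivalent of that reset (requires root; the loop mirrors the sysfs paths the script writes to):
for hp in /sys/devices/system/node/node*/hugepages/hugepages-*/nr_hugepages; do
    echo 0 > "$hp"          # drop any pre-existing per-node hugepage pools
done
export CLEAR_HUGE=yes       # same flag the script exports for later stages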
00:16:16.076 21:29:36 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:16.076 21:29:36 -- setup/common.sh@32 -- # continue 00:16:16.076 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.077 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.077 21:29:36 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:16.077 21:29:36 -- setup/common.sh@32 -- # continue 00:16:16.077 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.077 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.077 21:29:36 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:16.077 21:29:36 -- setup/common.sh@32 -- # continue 00:16:16.077 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.077 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.077 21:29:36 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:16.077 21:29:36 -- setup/common.sh@32 -- # continue 00:16:16.077 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.077 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.077 21:29:36 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:16.077 21:29:36 -- setup/common.sh@32 -- # continue 00:16:16.077 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.077 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.077 21:29:36 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:16.077 21:29:36 -- setup/common.sh@32 -- # continue 00:16:16.077 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.077 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.077 21:29:36 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:16.077 21:29:36 -- setup/common.sh@32 -- # continue 00:16:16.077 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.077 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.077 21:29:36 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:16.077 21:29:36 -- setup/common.sh@32 -- # continue 00:16:16.077 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.077 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.077 21:29:36 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:16.077 21:29:36 -- setup/common.sh@32 -- # continue 00:16:16.077 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.077 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.077 21:29:36 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:16.077 21:29:36 -- setup/common.sh@32 -- # continue 00:16:16.077 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.077 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.077 21:29:36 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:16.077 21:29:36 -- setup/common.sh@32 -- # continue 00:16:16.077 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.077 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.077 21:29:36 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:16.077 21:29:36 -- setup/common.sh@32 -- # continue 00:16:16.077 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.077 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.077 21:29:36 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:16.077 21:29:36 -- setup/common.sh@32 -- # 
continue 00:16:16.077 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.077 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.077 21:29:36 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:16.077 21:29:36 -- setup/common.sh@32 -- # continue 00:16:16.077 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.077 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.077 21:29:36 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:16.077 21:29:36 -- setup/common.sh@32 -- # continue 00:16:16.077 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.077 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.077 21:29:36 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:16.077 21:29:36 -- setup/common.sh@32 -- # continue 00:16:16.077 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.077 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.077 21:29:36 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:16.077 21:29:36 -- setup/common.sh@32 -- # continue 00:16:16.077 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.077 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.077 21:29:36 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:16.077 21:29:36 -- setup/common.sh@32 -- # continue 00:16:16.077 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.077 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.077 21:29:36 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:16.077 21:29:36 -- setup/common.sh@32 -- # continue 00:16:16.077 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.077 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.077 21:29:36 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:16.077 21:29:36 -- setup/common.sh@32 -- # continue 00:16:16.077 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.077 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.077 21:29:36 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:16.077 21:29:36 -- setup/common.sh@32 -- # continue 00:16:16.077 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.077 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.077 21:29:36 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:16.077 21:29:36 -- setup/common.sh@32 -- # continue 00:16:16.077 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.077 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.077 21:29:36 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:16.077 21:29:36 -- setup/common.sh@32 -- # continue 00:16:16.077 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.077 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.077 21:29:36 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:16.077 21:29:36 -- setup/common.sh@32 -- # continue 00:16:16.077 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.077 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.077 21:29:36 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:16.077 21:29:36 -- setup/common.sh@32 -- # continue 00:16:16.077 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.077 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 
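The wall of continue lines above is setup/common.sh's get_meminfo helper at work: it snapshots /proc/meminfo (or a node's own meminfo file when a node number is passed), then walks the fields one read -r var val _ at a time, emitting a continue for every name that is not the one it was asked for, until it reaches the match and echoes the value (HugePages_Total: 1024 just below). A rough, self-contained re-creation of that pattern, assuming only what the xtrace shows; the function name here is made up and is not the repo's actual helper:

shopt -s extglob   # needed for the "Node +([0-9]) " prefix strip below

get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    local -a mem
    local line var val _
    # A per-node query reads that node's own meminfo, as the trace does for node 0.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node <n> "; strip it, mirroring
    # the mem=("${mem[@]#Node +([0-9]) }") step visible in the xtrace.
    mem=("${mem[@]#Node +([0-9]) }")
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue   # each skipped field is one "continue" in the log
        echo "$val"
        return 0
    done
    return 1
}
# e.g. get_meminfo_sketch HugePages_Total      -> 1024 on this runner
#      get_meminfo_sketch HugePages_Surp 0     -> surplus huge pages on node 0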
00:16:16.077 21:29:36 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:16.077 21:29:36 -- setup/common.sh@33 -- # echo 1024 00:16:16.077 21:29:36 -- setup/common.sh@33 -- # return 0 00:16:16.077 21:29:36 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:16:16.077 21:29:36 -- setup/hugepages.sh@112 -- # get_nodes 00:16:16.077 21:29:36 -- setup/hugepages.sh@27 -- # local node 00:16:16.077 21:29:36 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:16:16.077 21:29:36 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:16:16.077 21:29:36 -- setup/hugepages.sh@32 -- # no_nodes=1 00:16:16.077 21:29:36 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:16:16.077 21:29:36 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:16:16.077 21:29:36 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:16:16.077 21:29:36 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:16:16.077 21:29:36 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:16:16.077 21:29:36 -- setup/common.sh@18 -- # local node=0 00:16:16.077 21:29:36 -- setup/common.sh@19 -- # local var val 00:16:16.077 21:29:36 -- setup/common.sh@20 -- # local mem_f mem 00:16:16.077 21:29:36 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:16:16.077 21:29:36 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:16:16.077 21:29:36 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:16:16.077 21:29:36 -- setup/common.sh@28 -- # mapfile -t mem 00:16:16.077 21:29:36 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:16:16.077 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.077 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.078 21:29:36 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6703048 kB' 'MemUsed: 5538928 kB' 'SwapCached: 0 kB' 'Active: 450372 kB' 'Inactive: 2645872 kB' 'Active(anon): 129472 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2645872 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'FilePages: 2977240 kB' 'Mapped: 48924 kB' 'AnonPages: 120540 kB' 'Shmem: 10468 kB' 'KernelStack: 6560 kB' 'PageTables: 4304 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 81332 kB' 'Slab: 159568 kB' 'SReclaimable: 81332 kB' 'SUnreclaim: 78236 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:16:16.078 21:29:36 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:16.078 21:29:36 -- setup/common.sh@32 -- # continue 00:16:16.078 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.078 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.078 21:29:36 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:16.078 21:29:36 -- setup/common.sh@32 -- # continue 00:16:16.078 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.078 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.078 21:29:36 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:16.078 21:29:36 -- setup/common.sh@32 -- # continue 00:16:16.078 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.078 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.078 21:29:36 -- setup/common.sh@32 
-- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:16.078 21:29:36 -- setup/common.sh@32 -- # continue 00:16:16.078 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.078 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.078 21:29:36 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:16.078 21:29:36 -- setup/common.sh@32 -- # continue 00:16:16.078 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.078 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.078 21:29:36 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:16.078 21:29:36 -- setup/common.sh@32 -- # continue 00:16:16.078 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.078 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.078 21:29:36 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:16.078 21:29:36 -- setup/common.sh@32 -- # continue 00:16:16.078 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.078 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.078 21:29:36 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:16.078 21:29:36 -- setup/common.sh@32 -- # continue 00:16:16.078 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.078 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.078 21:29:36 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:16.078 21:29:36 -- setup/common.sh@32 -- # continue 00:16:16.078 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.078 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.078 21:29:36 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:16.078 21:29:36 -- setup/common.sh@32 -- # continue 00:16:16.078 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.078 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.078 21:29:36 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:16.078 21:29:36 -- setup/common.sh@32 -- # continue 00:16:16.078 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.078 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.078 21:29:36 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:16.078 21:29:36 -- setup/common.sh@32 -- # continue 00:16:16.078 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.078 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.078 21:29:36 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:16.078 21:29:36 -- setup/common.sh@32 -- # continue 00:16:16.078 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.078 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.078 21:29:36 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:16.078 21:29:36 -- setup/common.sh@32 -- # continue 00:16:16.078 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.078 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.078 21:29:36 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:16.078 21:29:36 -- setup/common.sh@32 -- # continue 00:16:16.078 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.078 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.078 21:29:36 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:16.078 21:29:36 -- setup/common.sh@32 -- # continue 00:16:16.078 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.078 
21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.078 21:29:36 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:16.078 21:29:36 -- setup/common.sh@32 -- # continue 00:16:16.078 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.078 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.078 21:29:36 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:16.078 21:29:36 -- setup/common.sh@32 -- # continue 00:16:16.078 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.078 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.078 21:29:36 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:16.078 21:29:36 -- setup/common.sh@32 -- # continue 00:16:16.078 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.078 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.078 21:29:36 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:16.078 21:29:36 -- setup/common.sh@32 -- # continue 00:16:16.078 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.078 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.078 21:29:36 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:16.078 21:29:36 -- setup/common.sh@32 -- # continue 00:16:16.078 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.078 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.078 21:29:36 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:16.078 21:29:36 -- setup/common.sh@32 -- # continue 00:16:16.078 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.078 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.078 21:29:36 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:16.078 21:29:36 -- setup/common.sh@32 -- # continue 00:16:16.078 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.078 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.078 21:29:36 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:16.078 21:29:36 -- setup/common.sh@32 -- # continue 00:16:16.078 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.078 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.078 21:29:36 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:16.078 21:29:36 -- setup/common.sh@32 -- # continue 00:16:16.078 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.078 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.078 21:29:36 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:16.078 21:29:36 -- setup/common.sh@32 -- # continue 00:16:16.078 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.078 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.078 21:29:36 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:16.078 21:29:36 -- setup/common.sh@32 -- # continue 00:16:16.078 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.078 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.078 21:29:36 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:16.078 21:29:36 -- setup/common.sh@32 -- # continue 00:16:16.078 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.078 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.078 21:29:36 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:16.078 21:29:36 -- 
setup/common.sh@32 -- # continue 00:16:16.078 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.078 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.078 21:29:36 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:16.078 21:29:36 -- setup/common.sh@32 -- # continue 00:16:16.078 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.078 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.078 21:29:36 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:16.078 21:29:36 -- setup/common.sh@32 -- # continue 00:16:16.078 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.078 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.078 21:29:36 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:16.078 21:29:36 -- setup/common.sh@32 -- # continue 00:16:16.078 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.078 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.078 21:29:36 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:16.078 21:29:36 -- setup/common.sh@32 -- # continue 00:16:16.078 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.078 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.078 21:29:36 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:16.079 21:29:36 -- setup/common.sh@32 -- # continue 00:16:16.079 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.079 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.079 21:29:36 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:16.079 21:29:36 -- setup/common.sh@32 -- # continue 00:16:16.079 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.079 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.079 21:29:36 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:16.079 21:29:36 -- setup/common.sh@32 -- # continue 00:16:16.079 21:29:36 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.079 21:29:36 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.079 21:29:36 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:16.079 21:29:36 -- setup/common.sh@33 -- # echo 0 00:16:16.079 21:29:36 -- setup/common.sh@33 -- # return 0 00:16:16.079 node0=1024 expecting 1024 00:16:16.079 ************************************ 00:16:16.079 END TEST default_setup 00:16:16.079 ************************************ 00:16:16.079 21:29:36 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:16:16.079 21:29:36 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:16:16.079 21:29:36 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:16:16.079 21:29:36 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:16:16.079 21:29:36 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:16:16.079 21:29:36 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:16:16.079 00:16:16.079 real 0m1.044s 00:16:16.079 user 0m0.476s 00:16:16.079 sys 0m0.451s 00:16:16.079 21:29:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:16.079 21:29:36 -- common/autotest_common.sh@10 -- # set +x 00:16:16.079 21:29:36 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:16:16.079 21:29:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:16:16.079 21:29:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:16.079 21:29:36 -- 
common/autotest_common.sh@10 -- # set +x 00:16:16.079 ************************************ 00:16:16.079 START TEST per_node_1G_alloc 00:16:16.079 ************************************ 00:16:16.079 21:29:36 -- common/autotest_common.sh@1104 -- # per_node_1G_alloc 00:16:16.079 21:29:36 -- setup/hugepages.sh@143 -- # local IFS=, 00:16:16.079 21:29:36 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:16:16.079 21:29:36 -- setup/hugepages.sh@49 -- # local size=1048576 00:16:16.079 21:29:36 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:16:16.079 21:29:36 -- setup/hugepages.sh@51 -- # shift 00:16:16.079 21:29:36 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:16:16.079 21:29:36 -- setup/hugepages.sh@52 -- # local node_ids 00:16:16.079 21:29:36 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:16:16.079 21:29:36 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:16:16.079 21:29:36 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:16:16.079 21:29:36 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:16:16.079 21:29:36 -- setup/hugepages.sh@62 -- # local user_nodes 00:16:16.079 21:29:36 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:16:16.079 21:29:36 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:16:16.079 21:29:36 -- setup/hugepages.sh@67 -- # nodes_test=() 00:16:16.079 21:29:36 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:16:16.079 21:29:36 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:16:16.079 21:29:36 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:16:16.079 21:29:36 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:16:16.079 21:29:36 -- setup/hugepages.sh@73 -- # return 0 00:16:16.079 21:29:36 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:16:16.079 21:29:36 -- setup/hugepages.sh@146 -- # HUGENODE=0 00:16:16.079 21:29:36 -- setup/hugepages.sh@146 -- # setup output 00:16:16.079 21:29:36 -- setup/common.sh@9 -- # [[ output == output ]] 00:16:16.079 21:29:36 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:16.339 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:16.339 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:16.339 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:16.339 21:29:37 -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:16:16.339 21:29:37 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:16:16.339 21:29:37 -- setup/hugepages.sh@89 -- # local node 00:16:16.339 21:29:37 -- setup/hugepages.sh@90 -- # local sorted_t 00:16:16.339 21:29:37 -- setup/hugepages.sh@91 -- # local sorted_s 00:16:16.339 21:29:37 -- setup/hugepages.sh@92 -- # local surp 00:16:16.339 21:29:37 -- setup/hugepages.sh@93 -- # local resv 00:16:16.339 21:29:37 -- setup/hugepages.sh@94 -- # local anon 00:16:16.339 21:29:37 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:16:16.339 21:29:37 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:16:16.339 21:29:37 -- setup/common.sh@17 -- # local get=AnonHugePages 00:16:16.339 21:29:37 -- setup/common.sh@18 -- # local node= 00:16:16.339 21:29:37 -- setup/common.sh@19 -- # local var val 00:16:16.339 21:29:37 -- setup/common.sh@20 -- # local mem_f mem 00:16:16.339 21:29:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:16:16.339 21:29:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:16:16.339 21:29:37 -- setup/common.sh@25 -- # [[ -n '' ]] 00:16:16.339 21:29:37 -- 
setup/common.sh@28 -- # mapfile -t mem 00:16:16.339 21:29:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:16:16.339 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.339 21:29:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7754584 kB' 'MemAvailable: 10525084 kB' 'Buffers: 2436 kB' 'Cached: 2974804 kB' 'SwapCached: 0 kB' 'Active: 450836 kB' 'Inactive: 2645872 kB' 'Active(anon): 129936 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2645872 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 121040 kB' 'Mapped: 48984 kB' 'Shmem: 10468 kB' 'KReclaimable: 81332 kB' 'Slab: 159572 kB' 'SReclaimable: 81332 kB' 'SUnreclaim: 78240 kB' 'KernelStack: 6568 kB' 'PageTables: 4248 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 351880 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54964 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB' 00:16:16.339 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.339 21:29:37 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:16.339 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.339 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.339 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.339 21:29:37 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:16.339 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.339 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.339 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.339 21:29:37 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:16.339 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.339 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.339 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.339 21:29:37 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:16.339 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.339 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.339 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.339 21:29:37 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:16.339 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.339 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.339 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.339 21:29:37 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:16.339 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.339 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.339 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.339 21:29:37 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:16.339 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.339 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.339 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.339 21:29:37 -- setup/common.sh@32 -- # [[ 
Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:16.339 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.339 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.339 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.339 21:29:37 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:16.339 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.339 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.339 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.339 21:29:37 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:16.339 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.339 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.339 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.339 21:29:37 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:16.339 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.339 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.339 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.339 21:29:37 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:16.339 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.339 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.339 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.339 21:29:37 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:16.339 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.339 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.339 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.339 21:29:37 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:16.339 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.339 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.339 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.339 21:29:37 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:16.339 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.339 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.339 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.339 21:29:37 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:16.339 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.339 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.339 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.339 21:29:37 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:16.339 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.339 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.339 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.339 21:29:37 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:16.339 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.339 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.339 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.339 21:29:37 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:16.339 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.339 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.339 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.339 21:29:37 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:16.339 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.339 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.339 21:29:37 -- setup/common.sh@31 -- # read 
-r var val _ 00:16:16.339 21:29:37 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:16.339 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.339 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.339 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.339 21:29:37 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:16.339 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.339 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.339 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.339 21:29:37 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:16.339 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.339 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.339 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.339 21:29:37 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:16.339 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.339 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.339 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.339 21:29:37 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:16.339 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.339 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.339 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.339 21:29:37 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:16.339 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.339 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.339 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.339 21:29:37 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:16.339 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.339 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.339 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.339 21:29:37 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:16.340 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.340 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.340 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.340 21:29:37 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:16.340 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.340 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.340 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.340 21:29:37 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:16.340 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.340 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.340 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.340 21:29:37 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:16.340 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.340 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.340 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.340 21:29:37 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:16.340 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.340 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.340 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.340 21:29:37 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:16.340 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.340 21:29:37 -- setup/common.sh@31 -- 
# IFS=': ' 00:16:16.340 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.340 21:29:37 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:16.340 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.340 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.340 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.340 21:29:37 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:16.340 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.340 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.340 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.340 21:29:37 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:16.340 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.340 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.340 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.340 21:29:37 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:16.340 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.340 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.340 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.340 21:29:37 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:16.340 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.340 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.340 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.340 21:29:37 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:16.340 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.340 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.340 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.340 21:29:37 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:16.340 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.340 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.340 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.340 21:29:37 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:16.340 21:29:37 -- setup/common.sh@33 -- # echo 0 00:16:16.340 21:29:37 -- setup/common.sh@33 -- # return 0 00:16:16.340 21:29:37 -- setup/hugepages.sh@97 -- # anon=0 00:16:16.340 21:29:37 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:16:16.340 21:29:37 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:16:16.340 21:29:37 -- setup/common.sh@18 -- # local node= 00:16:16.340 21:29:37 -- setup/common.sh@19 -- # local var val 00:16:16.340 21:29:37 -- setup/common.sh@20 -- # local mem_f mem 00:16:16.340 21:29:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:16:16.340 21:29:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:16:16.340 21:29:37 -- setup/common.sh@25 -- # [[ -n '' ]] 00:16:16.340 21:29:37 -- setup/common.sh@28 -- # mapfile -t mem 00:16:16.340 21:29:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:16:16.340 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.340 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.340 21:29:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7755100 kB' 'MemAvailable: 10525600 kB' 'Buffers: 2436 kB' 'Cached: 2974804 kB' 'SwapCached: 0 kB' 'Active: 450488 kB' 'Inactive: 2645872 kB' 'Active(anon): 129588 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2645872 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 
kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 120692 kB' 'Mapped: 48924 kB' 'Shmem: 10468 kB' 'KReclaimable: 81332 kB' 'Slab: 159556 kB' 'SReclaimable: 81332 kB' 'SUnreclaim: 78224 kB' 'KernelStack: 6560 kB' 'PageTables: 4304 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 351880 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54948 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB' 00:16:16.340 21:29:37 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:16.340 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.340 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.340 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.340 21:29:37 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:16.340 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.340 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.340 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.340 21:29:37 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:16.340 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.340 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.340 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.340 21:29:37 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:16.340 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.340 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.340 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.340 21:29:37 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:16.340 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.340 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.340 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.340 21:29:37 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:16.340 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.340 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.340 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.340 21:29:37 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:16.340 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.340 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.340 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.340 21:29:37 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:16.340 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.340 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.340 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.340 21:29:37 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:16.340 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.340 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.340 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.340 21:29:37 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:16.340 21:29:37 -- 
setup/common.sh@32 -- # continue 00:16:16.340 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.340 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.340 21:29:37 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:16.340 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.340 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.340 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.340 21:29:37 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:16.340 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.340 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.340 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.340 21:29:37 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:16.340 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.340 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.340 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.340 21:29:37 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:16.340 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.340 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.340 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.340 21:29:37 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:16.340 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.340 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.340 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.340 21:29:37 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:16.340 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.340 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.340 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.340 21:29:37 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:16.340 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.340 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.340 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.340 21:29:37 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:16.340 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.340 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.340 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.341 21:29:37 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:16.341 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.341 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.341 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.341 21:29:37 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:16.341 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.341 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.341 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.341 21:29:37 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:16.341 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.341 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.341 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.341 21:29:37 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:16.341 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.341 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.341 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.341 21:29:37 -- 
setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:16.341 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.341 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.341 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.341 21:29:37 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:16.341 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.341 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.341 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.341 21:29:37 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:16.341 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.341 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.341 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.341 21:29:37 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:16.341 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.341 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.341 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.341 21:29:37 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:16.341 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.341 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.341 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.341 21:29:37 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:16.341 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.341 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.341 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.341 21:29:37 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:16.341 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.341 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.341 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.341 21:29:37 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:16.341 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.341 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.341 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.341 21:29:37 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:16.341 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.341 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.341 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.341 21:29:37 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:16.341 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.341 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.341 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.341 21:29:37 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:16.341 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.341 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.341 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.341 21:29:37 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:16.341 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.341 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.341 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.341 21:29:37 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:16.602 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.602 21:29:37 -- setup/common.sh@31 -- # 
IFS=': ' 00:16:16.602 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.602 21:29:37 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:16.602 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.602 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.602 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.602 21:29:37 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:16.602 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.602 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.602 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.602 21:29:37 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:16.602 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.602 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.602 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.602 21:29:37 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:16.602 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.602 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.602 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.602 21:29:37 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:16.602 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.602 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.602 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.602 21:29:37 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:16.602 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.602 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.602 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.602 21:29:37 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:16.602 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.602 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.602 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.602 21:29:37 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:16.602 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.602 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.602 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.602 21:29:37 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:16.602 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.602 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.602 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.602 21:29:37 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:16.602 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.602 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.602 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.602 21:29:37 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:16.602 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.602 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.602 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.602 21:29:37 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:16.602 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.602 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.602 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.602 21:29:37 -- setup/common.sh@32 -- # [[ Unaccepted == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:16.602 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.602 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.602 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.602 21:29:37 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:16.602 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.602 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.602 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.602 21:29:37 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:16.602 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.602 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.602 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.602 21:29:37 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:16.602 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.602 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.602 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.602 21:29:37 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:16.602 21:29:37 -- setup/common.sh@33 -- # echo 0 00:16:16.602 21:29:37 -- setup/common.sh@33 -- # return 0 00:16:16.602 21:29:37 -- setup/hugepages.sh@99 -- # surp=0 00:16:16.602 21:29:37 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:16:16.602 21:29:37 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:16:16.602 21:29:37 -- setup/common.sh@18 -- # local node= 00:16:16.602 21:29:37 -- setup/common.sh@19 -- # local var val 00:16:16.602 21:29:37 -- setup/common.sh@20 -- # local mem_f mem 00:16:16.602 21:29:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:16:16.602 21:29:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:16:16.602 21:29:37 -- setup/common.sh@25 -- # [[ -n '' ]] 00:16:16.602 21:29:37 -- setup/common.sh@28 -- # mapfile -t mem 00:16:16.602 21:29:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:16:16.602 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.602 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.603 21:29:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7754884 kB' 'MemAvailable: 10525384 kB' 'Buffers: 2436 kB' 'Cached: 2974804 kB' 'SwapCached: 0 kB' 'Active: 450464 kB' 'Inactive: 2645872 kB' 'Active(anon): 129564 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2645872 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 120684 kB' 'Mapped: 48924 kB' 'Shmem: 10468 kB' 'KReclaimable: 81332 kB' 'Slab: 159540 kB' 'SReclaimable: 81332 kB' 'SUnreclaim: 78208 kB' 'KernelStack: 6576 kB' 'PageTables: 4356 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 351880 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54948 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB' 00:16:16.603 21:29:37 -- setup/common.sh@32 -- # [[ 
MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:16.603 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.603 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.603 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.603 21:29:37 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:16.603 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.603 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.603 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.603 21:29:37 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:16.603 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.603 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.603 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.603 21:29:37 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:16.603 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.603 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.603 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.603 21:29:37 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:16.603 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.603 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.603 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.603 21:29:37 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:16.603 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.603 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.603 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.603 21:29:37 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:16.603 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.603 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.603 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.603 21:29:37 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:16.603 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.603 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.603 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.603 21:29:37 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:16.603 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.603 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.603 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.603 21:29:37 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:16.603 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.603 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.603 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.603 21:29:37 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:16.603 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.603 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.603 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.603 21:29:37 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:16.603 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.603 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.603 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.603 21:29:37 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:16.603 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.603 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.603 21:29:37 
-- setup/common.sh@31 -- # read -r var val _ 00:16:16.603 21:29:37 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:16.603 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.603 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.603 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.603 21:29:37 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:16.603 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.603 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.603 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.603 21:29:37 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:16.603 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.603 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.603 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.603 21:29:37 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:16.603 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.603 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.603 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.603 21:29:37 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:16.603 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.603 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.603 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.603 21:29:37 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:16.603 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.603 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.603 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.603 21:29:37 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:16.603 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.603 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.603 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.603 21:29:37 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:16.603 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.603 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.603 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.603 21:29:37 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:16.603 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.603 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.603 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.603 21:29:37 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:16.603 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.603 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.603 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.603 21:29:37 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:16.603 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.603 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.603 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.603 21:29:37 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:16.603 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.603 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.603 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.603 21:29:37 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:16.603 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.603 
21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.603 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.603 21:29:37 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:16.603 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.603 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.603 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.603 21:29:37 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:16.603 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.603 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.603 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.603 21:29:37 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:16.603 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.603 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.603 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.603 21:29:37 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:16.603 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.603 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.603 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.603 21:29:37 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:16.603 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.603 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.603 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.603 21:29:37 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:16.603 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.603 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.603 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.603 21:29:37 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:16.603 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.603 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.603 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.603 21:29:37 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:16.603 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.603 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.603 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.603 21:29:37 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:16.603 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.603 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.603 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.603 21:29:37 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:16.603 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.603 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.603 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.603 21:29:37 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:16.603 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.603 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.603 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.603 21:29:37 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:16.603 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.603 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.603 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.603 21:29:37 -- setup/common.sh@32 -- # [[ Percpu 
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:16.603 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.603 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.603 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.603 21:29:37 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:16.603 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.604 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.604 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.604 21:29:37 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:16.604 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.604 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.604 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.604 21:29:37 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:16.604 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.604 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.604 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.604 21:29:37 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:16.604 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.604 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.604 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.604 21:29:37 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:16.604 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.604 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.604 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.604 21:29:37 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:16.604 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.604 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.604 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.604 21:29:37 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:16.604 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.604 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.604 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.604 21:29:37 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:16.604 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.604 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.604 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.604 21:29:37 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:16.604 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.604 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.604 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.604 21:29:37 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:16.604 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.604 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.604 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.604 21:29:37 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:16.604 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.604 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.604 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.604 21:29:37 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:16.604 21:29:37 -- setup/common.sh@33 -- # echo 0 00:16:16.604 21:29:37 -- setup/common.sh@33 -- # return 0 
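The stretch of trace above and below repeats one pattern for every field of /proc/meminfo: the file is loaded into an array with mapfile, each entry is split on ': ' into a name and a value, every non-matching field hits "continue", and the value of the requested field is echoed just before "return 0". A condensed, hypothetical reconstruction of that get_meminfo helper, written for readability rather than as the verbatim setup/common.sh source (names follow the trace), looks like this:

#!/usr/bin/env bash
# Sketch of the get_meminfo helper whose execution is traced above; assumed
# reconstruction, not the shipped script.
shopt -s extglob

get_meminfo() {
    local get=$1 node=${2:-}          # field to look up, optional NUMA node
    local mem_f=/proc/meminfo
    # When a node is given, read the per-node statistics from sysfs instead.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    # Per-node meminfo prefixes every line with "Node N "; drop that prefix.
    mem=("${mem[@]#Node +([0-9]) }")
    local line var val _
    for line in "${mem[@]}"; do
        # Split "Name:   value kB" on ':' and whitespace, as in the trace.
        IFS=': ' read -r var val _ <<< "$line"
        # Skip every field until the requested one, then print its numeric
        # value (the kB suffix falls into the discarded third field) and stop.
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done
    return 1
}

# Example calls mirroring the trace:
get_meminfo HugePages_Rsvd        # system-wide reserved huge pages
get_meminfo HugePages_Surp 0      # surplus huge pages on NUMA node 0

Each scan in the log is simply one invocation of this loop, which is why every meminfo field name appears once per lookup.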
00:16:16.604 nr_hugepages=512 00:16:16.604 resv_hugepages=0 00:16:16.604 21:29:37 -- setup/hugepages.sh@100 -- # resv=0 00:16:16.604 21:29:37 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:16:16.604 21:29:37 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:16:16.604 surplus_hugepages=0 00:16:16.604 anon_hugepages=0 00:16:16.604 21:29:37 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:16:16.604 21:29:37 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:16:16.604 21:29:37 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:16:16.604 21:29:37 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:16:16.604 21:29:37 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:16:16.604 21:29:37 -- setup/common.sh@17 -- # local get=HugePages_Total 00:16:16.604 21:29:37 -- setup/common.sh@18 -- # local node= 00:16:16.604 21:29:37 -- setup/common.sh@19 -- # local var val 00:16:16.604 21:29:37 -- setup/common.sh@20 -- # local mem_f mem 00:16:16.604 21:29:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:16:16.604 21:29:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:16:16.604 21:29:37 -- setup/common.sh@25 -- # [[ -n '' ]] 00:16:16.604 21:29:37 -- setup/common.sh@28 -- # mapfile -t mem 00:16:16.604 21:29:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:16:16.604 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.604 21:29:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7755300 kB' 'MemAvailable: 10525800 kB' 'Buffers: 2436 kB' 'Cached: 2974804 kB' 'SwapCached: 0 kB' 'Active: 450344 kB' 'Inactive: 2645872 kB' 'Active(anon): 129444 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2645872 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 120564 kB' 'Mapped: 48924 kB' 'Shmem: 10468 kB' 'KReclaimable: 81332 kB' 'Slab: 159532 kB' 'SReclaimable: 81332 kB' 'SUnreclaim: 78200 kB' 'KernelStack: 6560 kB' 'PageTables: 4308 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 351880 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54948 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB' 00:16:16.604 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.604 21:29:37 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:16.604 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.604 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.604 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.604 21:29:37 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:16.604 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.604 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.604 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.604 21:29:37 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:16.604 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.604 
21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.604 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.604 21:29:37 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:16.604 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.604 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.604 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.604 21:29:37 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:16.604 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.604 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.604 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.604 21:29:37 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:16.604 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.604 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.604 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.604 21:29:37 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:16.604 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.604 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.604 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.604 21:29:37 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:16.604 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.604 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.604 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.604 21:29:37 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:16.604 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.604 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.604 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.604 21:29:37 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:16.604 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.604 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.604 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.604 21:29:37 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:16.604 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.604 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.604 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.604 21:29:37 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:16.604 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.604 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.604 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.604 21:29:37 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:16.604 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.604 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.604 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.604 21:29:37 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:16.604 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.604 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.604 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.604 21:29:37 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:16.604 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.604 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.604 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.604 21:29:37 -- setup/common.sh@32 -- # 
[[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:16.604 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.604 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.604 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.604 21:29:37 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:16.604 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.604 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.604 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.604 21:29:37 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:16.604 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.604 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.604 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.604 21:29:37 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:16.604 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.604 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.604 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.604 21:29:37 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:16.604 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.604 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.604 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.604 21:29:37 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:16.604 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.604 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.604 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.604 21:29:37 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:16.604 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.604 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.605 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.605 21:29:37 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:16.605 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.605 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.605 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.605 21:29:37 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:16.605 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.605 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.605 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.605 21:29:37 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:16.605 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.605 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.605 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.605 21:29:37 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:16.605 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.605 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.605 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.605 21:29:37 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:16.605 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.605 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.605 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.605 21:29:37 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:16.605 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.605 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.605 
21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.605 21:29:37 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:16.605 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.605 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.605 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.605 21:29:37 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:16.605 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.605 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.605 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.605 21:29:37 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:16.605 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.605 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.605 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.605 21:29:37 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:16.605 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.605 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.605 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.605 21:29:37 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:16.605 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.605 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.605 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.605 21:29:37 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:16.605 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.605 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.605 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.605 21:29:37 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:16.605 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.605 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.605 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.605 21:29:37 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:16.605 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.605 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.605 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.605 21:29:37 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:16.605 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.605 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.605 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.605 21:29:37 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:16.605 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.605 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.605 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.605 21:29:37 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:16.605 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.605 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.605 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.605 21:29:37 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:16.605 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.605 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.605 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.605 21:29:37 -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:16.605 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.605 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.605 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.605 21:29:37 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:16.605 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.605 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.605 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.605 21:29:37 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:16.605 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.605 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.605 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.605 21:29:37 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:16.605 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.605 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.605 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.605 21:29:37 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:16.605 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.605 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.605 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.605 21:29:37 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:16.605 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.605 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.605 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.605 21:29:37 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:16.605 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.605 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.605 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.605 21:29:37 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:16.605 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.605 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.605 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.605 21:29:37 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:16.605 21:29:37 -- setup/common.sh@33 -- # echo 512 00:16:16.605 21:29:37 -- setup/common.sh@33 -- # return 0 00:16:16.605 21:29:37 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:16:16.605 21:29:37 -- setup/hugepages.sh@112 -- # get_nodes 00:16:16.605 21:29:37 -- setup/hugepages.sh@27 -- # local node 00:16:16.605 21:29:37 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:16:16.605 21:29:37 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:16:16.605 21:29:37 -- setup/hugepages.sh@32 -- # no_nodes=1 00:16:16.605 21:29:37 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:16:16.605 21:29:37 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:16:16.605 21:29:37 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:16:16.605 21:29:37 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:16:16.605 21:29:37 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:16:16.605 21:29:37 -- setup/common.sh@18 -- # local node=0 00:16:16.605 21:29:37 -- setup/common.sh@19 -- # local var val 00:16:16.605 21:29:37 -- setup/common.sh@20 -- # local mem_f mem 00:16:16.605 21:29:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 
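What this part of the trace is verifying: the kernel-reported HugePages_Total (512) must equal the requested nr_hugepages plus surplus and reserved pages (both 0 in this run), and on this single-node VM node 0 must account for all 512 pages, which is what produces the "node0=512 expecting 512" line further down. A minimal sketch of that accounting, assuming the get_meminfo sketch shown earlier and a hypothetical helper name, is:

# Assumed condensation of the verify_nr_hugepages accounting traced here;
# helper name and structure are illustrative, not the shipped script.
check_hugepage_accounting() {
    local expected=$1                         # e.g. 512 for per_node_1G_alloc
    local surp resv total
    surp=$(get_meminfo HugePages_Surp)        # 0 in the run above
    resv=$(get_meminfo HugePages_Rsvd)        # 0 in the run above
    total=$(get_meminfo HugePages_Total)      # 512 in the run above
    # Kernel total must equal requested + surplus + reserved (512 == 512+0+0).
    (( total == expected + surp + resv )) || return 1
    # Per-node view: on this one-node VM, node 0 reports the same surplus.
    (( $(get_meminfo HugePages_Surp 0) == surp ))
}
check_hugepage_accounting 512 && echo 'node0=512 expecting 512'

The subsequent even_2G_alloc test repeats the same checks with nr_hugepages=1024 and HUGE_EVEN_ALLOC=yes, i.e. 2 GiB spread evenly across the available nodes.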
00:16:16.605 21:29:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:16:16.605 21:29:37 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:16:16.605 21:29:37 -- setup/common.sh@28 -- # mapfile -t mem 00:16:16.605 21:29:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:16:16.605 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.605 21:29:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7755300 kB' 'MemUsed: 4486676 kB' 'SwapCached: 0 kB' 'Active: 450520 kB' 'Inactive: 2645872 kB' 'Active(anon): 129620 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2645872 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'FilePages: 2977240 kB' 'Mapped: 48924 kB' 'AnonPages: 120740 kB' 'Shmem: 10468 kB' 'KernelStack: 6544 kB' 'PageTables: 4260 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 81332 kB' 'Slab: 159532 kB' 'SReclaimable: 81332 kB' 'SUnreclaim: 78200 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:16:16.605 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.605 21:29:37 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:16.605 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.605 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.605 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.605 21:29:37 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:16.605 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.605 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.605 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.605 21:29:37 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:16.605 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.605 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.605 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.605 21:29:37 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:16.605 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.605 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.605 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.605 21:29:37 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:16.605 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.605 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.605 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.605 21:29:37 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:16.605 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.605 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.605 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.605 21:29:37 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:16.605 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.605 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.605 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.605 21:29:37 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:16.605 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.605 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.605 21:29:37 -- setup/common.sh@31 -- # read 
-r var val _ 00:16:16.605 21:29:37 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:16.606 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.606 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.606 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.606 21:29:37 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:16.606 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.606 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.606 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.606 21:29:37 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:16.606 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.606 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.606 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.606 21:29:37 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:16.606 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.606 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.606 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.606 21:29:37 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:16.606 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.606 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.606 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.606 21:29:37 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:16.606 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.606 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.606 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.606 21:29:37 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:16.606 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.606 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.606 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.606 21:29:37 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:16.606 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.606 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.606 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.606 21:29:37 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:16.606 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.606 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.606 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.606 21:29:37 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:16.606 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.606 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.606 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.606 21:29:37 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:16.606 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.606 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.606 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.606 21:29:37 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:16.606 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.606 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.606 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.606 21:29:37 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:16.606 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.606 21:29:37 
-- setup/common.sh@31 -- # IFS=': ' 00:16:16.606 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.606 21:29:37 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:16.606 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.606 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.606 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.606 21:29:37 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:16.606 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.606 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.606 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.606 21:29:37 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:16.606 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.606 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.606 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.606 21:29:37 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:16.606 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.606 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.606 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.606 21:29:37 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:16.606 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.606 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.606 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.606 21:29:37 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:16.606 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.606 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.606 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.606 21:29:37 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:16.606 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.606 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.606 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.606 21:29:37 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:16.606 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.606 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.606 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.606 21:29:37 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:16.606 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.606 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.606 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.606 21:29:37 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:16.606 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.606 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.606 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.606 21:29:37 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:16.606 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.606 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.606 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.606 21:29:37 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:16.606 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.606 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.606 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.606 21:29:37 -- setup/common.sh@32 -- # [[ Unaccepted 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:16.606 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.606 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.606 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.606 21:29:37 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:16.606 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.606 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.606 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.606 21:29:37 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:16.606 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.606 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.606 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.606 21:29:37 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:16.606 21:29:37 -- setup/common.sh@33 -- # echo 0 00:16:16.606 21:29:37 -- setup/common.sh@33 -- # return 0 00:16:16.606 node0=512 expecting 512 00:16:16.606 21:29:37 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:16:16.606 21:29:37 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:16:16.606 21:29:37 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:16:16.606 21:29:37 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:16:16.606 21:29:37 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:16:16.606 21:29:37 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:16:16.606 00:16:16.606 real 0m0.502s 00:16:16.606 user 0m0.232s 00:16:16.606 sys 0m0.272s 00:16:16.606 ************************************ 00:16:16.606 END TEST per_node_1G_alloc 00:16:16.606 ************************************ 00:16:16.606 21:29:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:16.606 21:29:37 -- common/autotest_common.sh@10 -- # set +x 00:16:16.606 21:29:37 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:16:16.606 21:29:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:16:16.606 21:29:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:16.606 21:29:37 -- common/autotest_common.sh@10 -- # set +x 00:16:16.606 ************************************ 00:16:16.606 START TEST even_2G_alloc 00:16:16.606 ************************************ 00:16:16.606 21:29:37 -- common/autotest_common.sh@1104 -- # even_2G_alloc 00:16:16.606 21:29:37 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:16:16.606 21:29:37 -- setup/hugepages.sh@49 -- # local size=2097152 00:16:16.606 21:29:37 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:16:16.606 21:29:37 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:16:16.606 21:29:37 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:16:16.606 21:29:37 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:16:16.606 21:29:37 -- setup/hugepages.sh@62 -- # user_nodes=() 00:16:16.606 21:29:37 -- setup/hugepages.sh@62 -- # local user_nodes 00:16:16.606 21:29:37 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:16:16.606 21:29:37 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:16:16.606 21:29:37 -- setup/hugepages.sh@67 -- # nodes_test=() 00:16:16.606 21:29:37 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:16:16.607 21:29:37 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:16:16.607 21:29:37 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:16:16.607 21:29:37 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:16:16.607 21:29:37 -- setup/hugepages.sh@82 -- # 
nodes_test[_no_nodes - 1]=1024 00:16:16.607 21:29:37 -- setup/hugepages.sh@83 -- # : 0 00:16:16.607 21:29:37 -- setup/hugepages.sh@84 -- # : 0 00:16:16.607 21:29:37 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:16:16.607 21:29:37 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:16:16.607 21:29:37 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:16:16.607 21:29:37 -- setup/hugepages.sh@153 -- # setup output 00:16:16.607 21:29:37 -- setup/common.sh@9 -- # [[ output == output ]] 00:16:16.607 21:29:37 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:16.866 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:16.866 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:16.866 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:16.866 21:29:37 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:16:16.866 21:29:37 -- setup/hugepages.sh@89 -- # local node 00:16:16.866 21:29:37 -- setup/hugepages.sh@90 -- # local sorted_t 00:16:16.866 21:29:37 -- setup/hugepages.sh@91 -- # local sorted_s 00:16:16.866 21:29:37 -- setup/hugepages.sh@92 -- # local surp 00:16:16.866 21:29:37 -- setup/hugepages.sh@93 -- # local resv 00:16:16.866 21:29:37 -- setup/hugepages.sh@94 -- # local anon 00:16:16.866 21:29:37 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:16:16.866 21:29:37 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:16:16.866 21:29:37 -- setup/common.sh@17 -- # local get=AnonHugePages 00:16:16.866 21:29:37 -- setup/common.sh@18 -- # local node= 00:16:16.866 21:29:37 -- setup/common.sh@19 -- # local var val 00:16:16.866 21:29:37 -- setup/common.sh@20 -- # local mem_f mem 00:16:16.866 21:29:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:16:16.866 21:29:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:16:16.866 21:29:37 -- setup/common.sh@25 -- # [[ -n '' ]] 00:16:16.866 21:29:37 -- setup/common.sh@28 -- # mapfile -t mem 00:16:16.866 21:29:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:16:16.866 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.866 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.866 21:29:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6706372 kB' 'MemAvailable: 9476872 kB' 'Buffers: 2436 kB' 'Cached: 2974804 kB' 'SwapCached: 0 kB' 'Active: 450716 kB' 'Inactive: 2645872 kB' 'Active(anon): 129816 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2645872 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 120920 kB' 'Mapped: 49028 kB' 'Shmem: 10468 kB' 'KReclaimable: 81332 kB' 'Slab: 159552 kB' 'SReclaimable: 81332 kB' 'SUnreclaim: 78220 kB' 'KernelStack: 6536 kB' 'PageTables: 4396 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 351880 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54948 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 
'DirectMap1G: 8388608 kB' 00:16:16.866 21:29:37 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:16.866 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.866 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.866 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.866 21:29:37 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:16.866 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.866 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.866 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.866 21:29:37 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:16.866 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.866 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.866 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.866 21:29:37 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:16.866 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.866 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.866 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.866 21:29:37 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:16.866 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.866 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.866 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.866 21:29:37 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:16.866 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.866 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.866 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.866 21:29:37 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:16.866 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.866 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.866 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.866 21:29:37 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:16.866 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.866 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.866 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.866 21:29:37 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:16.866 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.866 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.866 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.866 21:29:37 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:16.866 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.866 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.867 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.867 21:29:37 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:16.867 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.867 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.867 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.867 21:29:37 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:16.867 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.867 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.867 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.867 21:29:37 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:16.867 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.867 21:29:37 -- 
setup/common.sh@31 -- # IFS=': ' 00:16:16.867 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.867 21:29:37 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:16.867 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.867 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.867 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.867 21:29:37 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:16.867 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.867 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.867 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.867 21:29:37 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:16.867 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.867 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.867 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.867 21:29:37 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:16.867 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.867 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.867 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.867 21:29:37 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:16.867 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.867 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.867 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.867 21:29:37 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:16.867 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.867 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.867 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.867 21:29:37 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:16.867 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.867 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.867 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.867 21:29:37 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:16.867 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.867 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.867 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.867 21:29:37 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:16.867 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.867 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.867 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.867 21:29:37 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:16.867 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.867 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.867 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.867 21:29:37 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:16.867 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.867 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.867 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.867 21:29:37 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:16.867 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.867 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.867 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.867 21:29:37 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:16.867 21:29:37 -- 
setup/common.sh@32 -- # continue 00:16:16.867 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.867 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.867 21:29:37 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:16.867 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.867 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.867 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.867 21:29:37 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:16.867 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.867 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.867 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.867 21:29:37 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:16.867 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.867 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.867 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.867 21:29:37 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:16.867 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.867 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.867 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.867 21:29:37 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:16.867 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.867 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.867 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.867 21:29:37 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:16.867 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.867 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.867 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.867 21:29:37 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:16.867 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.867 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.867 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.867 21:29:37 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:16.867 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.867 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.867 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.867 21:29:37 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:16.867 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.867 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.867 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.867 21:29:37 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:16.867 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.867 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.867 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.867 21:29:37 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:16.867 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.867 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.867 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.867 21:29:37 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:16.867 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.867 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.867 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.867 21:29:37 -- 
setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:16.867 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.867 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.867 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.867 21:29:37 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:16.867 21:29:37 -- setup/common.sh@32 -- # continue 00:16:16.867 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:16.868 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:16.868 21:29:37 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:16.868 21:29:37 -- setup/common.sh@33 -- # echo 0 00:16:17.138 21:29:37 -- setup/common.sh@33 -- # return 0 00:16:17.138 21:29:37 -- setup/hugepages.sh@97 -- # anon=0 00:16:17.138 21:29:37 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:16:17.138 21:29:37 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:16:17.138 21:29:37 -- setup/common.sh@18 -- # local node= 00:16:17.138 21:29:37 -- setup/common.sh@19 -- # local var val 00:16:17.138 21:29:37 -- setup/common.sh@20 -- # local mem_f mem 00:16:17.138 21:29:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:16:17.138 21:29:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:16:17.138 21:29:37 -- setup/common.sh@25 -- # [[ -n '' ]] 00:16:17.138 21:29:37 -- setup/common.sh@28 -- # mapfile -t mem 00:16:17.138 21:29:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:16:17.138 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.138 21:29:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6706372 kB' 'MemAvailable: 9476872 kB' 'Buffers: 2436 kB' 'Cached: 2974804 kB' 'SwapCached: 0 kB' 'Active: 450368 kB' 'Inactive: 2645872 kB' 'Active(anon): 129468 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2645872 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 120572 kB' 'Mapped: 48924 kB' 'Shmem: 10468 kB' 'KReclaimable: 81332 kB' 'Slab: 159580 kB' 'SReclaimable: 81332 kB' 'SUnreclaim: 78248 kB' 'KernelStack: 6560 kB' 'PageTables: 4308 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 351880 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54932 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB' 00:16:17.138 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.138 21:29:37 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.138 21:29:37 -- setup/common.sh@32 -- # continue 00:16:17.138 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.138 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.138 21:29:37 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.138 21:29:37 -- setup/common.sh@32 -- # continue 00:16:17.138 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.138 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.138 21:29:37 -- 
setup/common.sh@32 -- # [[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue repeated for each /proc/meminfo key, MemAvailable through HardwareCorrupted 00:16:17.139 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 
00:16:17.139 21:29:37 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.139 21:29:37 -- setup/common.sh@32 -- # continue 00:16:17.139 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.139 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.139 21:29:37 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.139 21:29:37 -- setup/common.sh@32 -- # continue 00:16:17.139 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.139 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.139 21:29:37 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.139 21:29:37 -- setup/common.sh@32 -- # continue 00:16:17.139 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.139 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.139 21:29:37 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.139 21:29:37 -- setup/common.sh@32 -- # continue 00:16:17.139 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.139 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.139 21:29:37 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.139 21:29:37 -- setup/common.sh@32 -- # continue 00:16:17.139 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.139 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.139 21:29:37 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.139 21:29:37 -- setup/common.sh@32 -- # continue 00:16:17.139 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.139 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.139 21:29:37 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.139 21:29:37 -- setup/common.sh@32 -- # continue 00:16:17.139 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.139 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.139 21:29:37 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.139 21:29:37 -- setup/common.sh@32 -- # continue 00:16:17.139 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.139 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.139 21:29:37 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.139 21:29:37 -- setup/common.sh@32 -- # continue 00:16:17.139 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.139 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.139 21:29:37 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.139 21:29:37 -- setup/common.sh@32 -- # continue 00:16:17.139 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.139 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.139 21:29:37 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.139 21:29:37 -- setup/common.sh@32 -- # continue 00:16:17.139 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.139 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.139 21:29:37 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.139 21:29:37 -- setup/common.sh@33 -- # echo 0 00:16:17.139 21:29:37 -- setup/common.sh@33 -- # return 0 00:16:17.139 21:29:37 -- setup/hugepages.sh@99 -- # surp=0 00:16:17.139 21:29:37 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:16:17.139 21:29:37 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 
00:16:17.139 21:29:37 -- setup/common.sh@18 -- # local node= 00:16:17.139 21:29:37 -- setup/common.sh@19 -- # local var val 00:16:17.139 21:29:37 -- setup/common.sh@20 -- # local mem_f mem 00:16:17.139 21:29:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:16:17.139 21:29:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:16:17.139 21:29:37 -- setup/common.sh@25 -- # [[ -n '' ]] 00:16:17.139 21:29:37 -- setup/common.sh@28 -- # mapfile -t mem 00:16:17.139 21:29:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:16:17.139 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.140 21:29:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6706372 kB' 'MemAvailable: 9476872 kB' 'Buffers: 2436 kB' 'Cached: 2974804 kB' 'SwapCached: 0 kB' 'Active: 450388 kB' 'Inactive: 2645872 kB' 'Active(anon): 129488 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2645872 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 120588 kB' 'Mapped: 48924 kB' 'Shmem: 10468 kB' 'KReclaimable: 81332 kB' 'Slab: 159580 kB' 'SReclaimable: 81332 kB' 'SUnreclaim: 78248 kB' 'KernelStack: 6560 kB' 'PageTables: 4308 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 351880 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54932 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB' 00:16:17.140 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.140 21:29:37 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:17.140 21:29:37 -- setup/common.sh@32 -- # continue 00:16:17.140 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.140 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.140 21:29:37 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:17.140 21:29:37 -- setup/common.sh@32 -- # continue 00:16:17.140 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.140 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.140 21:29:37 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:17.140 21:29:37 -- setup/common.sh@32 -- # continue 00:16:17.140 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.140 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.140 21:29:37 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:17.140 21:29:37 -- setup/common.sh@32 -- # continue 00:16:17.140 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.140 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.140 21:29:37 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:17.140 21:29:37 -- setup/common.sh@32 -- # continue 00:16:17.140 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.140 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.140 21:29:37 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:17.140 21:29:37 -- setup/common.sh@32 -- # continue 
00:16:17.140 21:29:37 -- setup/common.sh@32 -- # [[ <key> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] / continue repeated for each /proc/meminfo key, Active through FileHugePages 00:16:17.141
21:29:37 -- setup/common.sh@32 -- # continue 00:16:17.141 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.141 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.141 21:29:37 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:17.141 21:29:37 -- setup/common.sh@32 -- # continue 00:16:17.141 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.141 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.141 21:29:37 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:17.141 21:29:37 -- setup/common.sh@32 -- # continue 00:16:17.141 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.141 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.141 21:29:37 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:17.141 21:29:37 -- setup/common.sh@32 -- # continue 00:16:17.141 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.141 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.141 21:29:37 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:17.141 21:29:37 -- setup/common.sh@32 -- # continue 00:16:17.141 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.141 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.141 21:29:37 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:17.141 21:29:37 -- setup/common.sh@32 -- # continue 00:16:17.141 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.141 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.141 21:29:37 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:17.141 21:29:37 -- setup/common.sh@32 -- # continue 00:16:17.141 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.141 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.141 21:29:37 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:17.141 21:29:37 -- setup/common.sh@33 -- # echo 0 00:16:17.141 21:29:37 -- setup/common.sh@33 -- # return 0 00:16:17.141 21:29:37 -- setup/hugepages.sh@100 -- # resv=0 00:16:17.141 21:29:37 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:16:17.141 nr_hugepages=1024 00:16:17.141 resv_hugepages=0 00:16:17.141 surplus_hugepages=0 00:16:17.141 anon_hugepages=0 00:16:17.141 21:29:37 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:16:17.141 21:29:37 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:16:17.141 21:29:37 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:16:17.141 21:29:37 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:16:17.141 21:29:37 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:16:17.141 21:29:37 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:16:17.141 21:29:37 -- setup/common.sh@17 -- # local get=HugePages_Total 00:16:17.141 21:29:37 -- setup/common.sh@18 -- # local node= 00:16:17.141 21:29:37 -- setup/common.sh@19 -- # local var val 00:16:17.141 21:29:37 -- setup/common.sh@20 -- # local mem_f mem 00:16:17.141 21:29:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:16:17.141 21:29:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:16:17.141 21:29:37 -- setup/common.sh@25 -- # [[ -n '' ]] 00:16:17.141 21:29:37 -- setup/common.sh@28 -- # mapfile -t mem 00:16:17.141 21:29:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:16:17.141 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.141 21:29:37 -- 
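With anon, surp and resv all read back as 0 and nr_hugepages set to 1024, the check traced at setup/hugepages.sh@107/@110 reduces to simple arithmetic. Restated as a sketch (the variable names mirror the trace; where exactly hugepages.sh reads HugePages_Total back is not shown in this excerpt):

nr_hugepages=1024   # pages requested for the test
surp=0              # HugePages_Surp from the snapshot above
resv=0              # HugePages_Rsvd from the snapshot above
total=1024          # HugePages_Total reported by the kernel

# The test only proceeds if the reported total matches requested + surplus + reserved pages.
(( total == nr_hugepages + surp + resv )) && echo "hugepage accounting consistent"   # 1024 == 1024 + 0 + 0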
setup/common.sh@31 -- # read -r var val _ 00:16:17.141 21:29:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6706372 kB' 'MemAvailable: 9476872 kB' 'Buffers: 2436 kB' 'Cached: 2974804 kB' 'SwapCached: 0 kB' 'Active: 450472 kB' 'Inactive: 2645872 kB' 'Active(anon): 129572 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2645872 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 120712 kB' 'Mapped: 48924 kB' 'Shmem: 10468 kB' 'KReclaimable: 81332 kB' 'Slab: 159580 kB' 'SReclaimable: 81332 kB' 'SUnreclaim: 78248 kB' 'KernelStack: 6592 kB' 'PageTables: 4408 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 351880 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54948 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB' 00:16:17.141 21:29:37 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:17.141 21:29:37 -- setup/common.sh@32 -- # continue 00:16:17.141 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.141 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.141 21:29:37 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:17.141 21:29:37 -- setup/common.sh@32 -- # continue 00:16:17.141 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.141 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.141 21:29:37 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:17.141 21:29:37 -- setup/common.sh@32 -- # continue 00:16:17.141 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.141 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.141 21:29:37 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:17.141 21:29:37 -- setup/common.sh@32 -- # continue 00:16:17.141 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.141 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.141 21:29:37 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:17.141 21:29:37 -- setup/common.sh@32 -- # continue 00:16:17.141 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.141 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.141 21:29:37 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:17.141 21:29:37 -- setup/common.sh@32 -- # continue 00:16:17.141 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.141 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.141 21:29:37 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:17.141 21:29:37 -- setup/common.sh@32 -- # continue 00:16:17.141 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.141 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.141 21:29:37 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:17.141 21:29:37 -- setup/common.sh@32 -- # continue 00:16:17.141 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.141 
21:29:37 -- setup/common.sh@32 -- # [[ <key> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] / continue repeated for each /proc/meminfo key, Active(anon) through CmaTotal 00:16:17.142
21:29:37 -- setup/common.sh@32 -- # continue 00:16:17.142 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.142 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.142 21:29:37 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:17.142 21:29:37 -- setup/common.sh@32 -- # continue 00:16:17.142 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.142 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.142 21:29:37 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:17.142 21:29:37 -- setup/common.sh@32 -- # continue 00:16:17.142 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.142 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.142 21:29:37 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:17.142 21:29:37 -- setup/common.sh@33 -- # echo 1024 00:16:17.142 21:29:37 -- setup/common.sh@33 -- # return 0 00:16:17.142 21:29:37 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:16:17.142 21:29:37 -- setup/hugepages.sh@112 -- # get_nodes 00:16:17.142 21:29:37 -- setup/hugepages.sh@27 -- # local node 00:16:17.142 21:29:37 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:16:17.142 21:29:37 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:16:17.142 21:29:37 -- setup/hugepages.sh@32 -- # no_nodes=1 00:16:17.142 21:29:37 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:16:17.142 21:29:37 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:16:17.142 21:29:37 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:16:17.142 21:29:37 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:16:17.142 21:29:37 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:16:17.142 21:29:37 -- setup/common.sh@18 -- # local node=0 00:16:17.142 21:29:37 -- setup/common.sh@19 -- # local var val 00:16:17.142 21:29:37 -- setup/common.sh@20 -- # local mem_f mem 00:16:17.142 21:29:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:16:17.142 21:29:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:16:17.142 21:29:37 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:16:17.142 21:29:37 -- setup/common.sh@28 -- # mapfile -t mem 00:16:17.142 21:29:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:16:17.142 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.142 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.142 21:29:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6707932 kB' 'MemUsed: 5534044 kB' 'SwapCached: 0 kB' 'Active: 450232 kB' 'Inactive: 2645872 kB' 'Active(anon): 129332 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2645872 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'FilePages: 2977240 kB' 'Mapped: 48924 kB' 'AnonPages: 120464 kB' 'Shmem: 10468 kB' 'KernelStack: 6544 kB' 'PageTables: 4256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 81332 kB' 'Slab: 159560 kB' 'SReclaimable: 81332 kB' 'SUnreclaim: 78228 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:16:17.142 21:29:37 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.142 21:29:37 -- setup/common.sh@32 -- # 
continue 00:16:17.142 21:29:37 -- setup/common.sh@32 -- # [[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue repeated for each node0 meminfo key, MemFree through Slab 00:16:17.143 21:29:37 -- 
setup/common.sh@31 -- # read -r var val _ 00:16:17.143 21:29:37 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.143 21:29:37 -- setup/common.sh@32 -- # continue 00:16:17.143 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.143 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.143 21:29:37 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.143 21:29:37 -- setup/common.sh@32 -- # continue 00:16:17.143 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.143 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.143 21:29:37 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.143 21:29:37 -- setup/common.sh@32 -- # continue 00:16:17.143 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.143 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.143 21:29:37 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.143 21:29:37 -- setup/common.sh@32 -- # continue 00:16:17.143 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.143 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.143 21:29:37 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.143 21:29:37 -- setup/common.sh@32 -- # continue 00:16:17.143 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.143 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.143 21:29:37 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.143 21:29:37 -- setup/common.sh@32 -- # continue 00:16:17.143 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.143 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.143 21:29:37 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.143 21:29:37 -- setup/common.sh@32 -- # continue 00:16:17.143 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.143 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.143 21:29:37 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.143 21:29:37 -- setup/common.sh@32 -- # continue 00:16:17.143 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.143 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.143 21:29:37 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.143 21:29:37 -- setup/common.sh@32 -- # continue 00:16:17.143 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.143 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.143 21:29:37 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.143 21:29:37 -- setup/common.sh@32 -- # continue 00:16:17.143 21:29:37 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.143 21:29:37 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.143 21:29:37 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.143 21:29:37 -- setup/common.sh@33 -- # echo 0 00:16:17.143 21:29:37 -- setup/common.sh@33 -- # return 0 00:16:17.143 node0=1024 expecting 1024 00:16:17.143 ************************************ 00:16:17.143 END TEST even_2G_alloc 00:16:17.143 ************************************ 00:16:17.143 21:29:37 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:16:17.143 21:29:37 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:16:17.143 21:29:37 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:16:17.143 21:29:37 -- 
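The query that just returned 0 was answered from /sys/devices/system/node/node0/meminfo rather than /proc/meminfo: once verify_nr_hugepages walks the per-node expectations, get_meminfo is called with a node id and switches its source file, which is why the echoed expectation reads 'node0=1024 expecting 1024'. A rough sketch of that source selection (node_meminfo is a hypothetical helper, not the traced function):

# Choose the meminfo source the way the traced helper appears to:
# the global /proc/meminfo by default, the per-node sysfs file when a node id is given.
node_meminfo() {
    local node=${1:-} mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    # Per-node lines carry a "Node <N> " prefix, which the trace strips with
    # mem=("${mem[@]#Node +([0-9]) }") before parsing key/value pairs.
    cat "$mem_f"
}

node_meminfo 0 | grep HugePages_Total   # node0 is expected to hold all 1024 pages here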
setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:16:17.143 21:29:37 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:16:17.143 21:29:37 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:16:17.143 00:16:17.143 real 0m0.542s 00:16:17.143 user 0m0.257s 00:16:17.143 sys 0m0.281s 00:16:17.143 21:29:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:17.143 21:29:37 -- common/autotest_common.sh@10 -- # set +x 00:16:17.143 21:29:38 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:16:17.143 21:29:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:16:17.143 21:29:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:17.143 21:29:38 -- common/autotest_common.sh@10 -- # set +x 00:16:17.143 ************************************ 00:16:17.143 START TEST odd_alloc 00:16:17.143 ************************************ 00:16:17.143 21:29:38 -- common/autotest_common.sh@1104 -- # odd_alloc 00:16:17.143 21:29:38 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:16:17.143 21:29:38 -- setup/hugepages.sh@49 -- # local size=2098176 00:16:17.143 21:29:38 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:16:17.143 21:29:38 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:16:17.143 21:29:38 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:16:17.143 21:29:38 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:16:17.143 21:29:38 -- setup/hugepages.sh@62 -- # user_nodes=() 00:16:17.143 21:29:38 -- setup/hugepages.sh@62 -- # local user_nodes 00:16:17.143 21:29:38 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:16:17.143 21:29:38 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:16:17.143 21:29:38 -- setup/hugepages.sh@67 -- # nodes_test=() 00:16:17.143 21:29:38 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:16:17.143 21:29:38 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:16:17.143 21:29:38 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:16:17.143 21:29:38 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:16:17.143 21:29:38 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:16:17.144 21:29:38 -- setup/hugepages.sh@83 -- # : 0 00:16:17.144 21:29:38 -- setup/hugepages.sh@84 -- # : 0 00:16:17.144 21:29:38 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:16:17.144 21:29:38 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:16:17.144 21:29:38 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:16:17.144 21:29:38 -- setup/hugepages.sh@160 -- # setup output 00:16:17.144 21:29:38 -- setup/common.sh@9 -- # [[ output == output ]] 00:16:17.144 21:29:38 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:17.401 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:17.401 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:17.401 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:17.401 21:29:38 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:16:17.401 21:29:38 -- setup/hugepages.sh@89 -- # local node 00:16:17.401 21:29:38 -- setup/hugepages.sh@90 -- # local sorted_t 00:16:17.401 21:29:38 -- setup/hugepages.sh@91 -- # local sorted_s 00:16:17.401 21:29:38 -- setup/hugepages.sh@92 -- # local surp 00:16:17.401 21:29:38 -- setup/hugepages.sh@93 -- # local resv 00:16:17.401 21:29:38 -- setup/hugepages.sh@94 -- # local anon 00:16:17.401 21:29:38 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:16:17.662 21:29:38 -- 
setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:16:17.662 21:29:38 -- setup/common.sh@17 -- # local get=AnonHugePages 00:16:17.662 21:29:38 -- setup/common.sh@18 -- # local node= 00:16:17.662 21:29:38 -- setup/common.sh@19 -- # local var val 00:16:17.662 21:29:38 -- setup/common.sh@20 -- # local mem_f mem 00:16:17.662 21:29:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:16:17.662 21:29:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:16:17.662 21:29:38 -- setup/common.sh@25 -- # [[ -n '' ]] 00:16:17.662 21:29:38 -- setup/common.sh@28 -- # mapfile -t mem 00:16:17.662 21:29:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:16:17.662 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.662 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.662 21:29:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6704344 kB' 'MemAvailable: 9474844 kB' 'Buffers: 2436 kB' 'Cached: 2974804 kB' 'SwapCached: 0 kB' 'Active: 450408 kB' 'Inactive: 2645872 kB' 'Active(anon): 129508 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2645872 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 120868 kB' 'Mapped: 49296 kB' 'Shmem: 10468 kB' 'KReclaimable: 81332 kB' 'Slab: 159572 kB' 'SReclaimable: 81332 kB' 'SUnreclaim: 78240 kB' 'KernelStack: 6536 kB' 'PageTables: 4340 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 351880 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54948 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB' 00:16:17.662 21:29:38 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:17.662 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.662 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.662 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.662 21:29:38 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:17.662 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.662 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.662 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.662 21:29:38 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:17.662 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.662 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.662 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.662 21:29:38 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:17.662 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.662 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.662 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.662 21:29:38 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:17.662 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.662 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.663 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.663 21:29:38 -- 
setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:17.663 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.663 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.663 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.663 21:29:38 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:17.663 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.663 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.663 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.663 21:29:38 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:17.663 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.663 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.663 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.663 21:29:38 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:17.663 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.663 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.663 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.663 21:29:38 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:17.663 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.663 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.663 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.663 21:29:38 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:17.663 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.663 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.663 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.663 21:29:38 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:17.663 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.663 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.663 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.663 21:29:38 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:17.663 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.663 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.663 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.663 21:29:38 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:17.663 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.663 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.663 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.663 21:29:38 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:17.663 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.663 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.663 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.663 21:29:38 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:17.663 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.663 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.663 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.663 21:29:38 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:17.663 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.663 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.663 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.663 21:29:38 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:17.663 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.663 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.663 21:29:38 -- 
setup/common.sh@31 -- # read -r var val _ 00:16:17.663 21:29:38 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:17.663 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.663 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.663 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.663 21:29:38 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:17.663 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.663 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.663 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.663 21:29:38 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:17.663 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.663 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.663 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.663 21:29:38 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:17.663 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.663 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.663 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.663 21:29:38 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:17.663 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.663 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.663 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.663 21:29:38 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:17.663 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.663 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.663 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.663 21:29:38 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:17.663 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.663 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.663 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.663 21:29:38 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:17.663 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.663 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.663 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.663 21:29:38 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:17.663 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.663 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.663 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.663 21:29:38 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:17.663 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.663 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.663 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.663 21:29:38 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:17.663 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.663 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.663 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.663 21:29:38 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:17.663 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.663 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.663 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.663 21:29:38 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:17.663 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.663 21:29:38 
-- setup/common.sh@31 -- # IFS=': ' 00:16:17.663 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.663 21:29:38 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:17.663 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.663 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.663 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.663 21:29:38 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:17.663 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.663 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.663 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.663 21:29:38 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:17.663 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.663 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.663 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.663 21:29:38 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:17.663 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.663 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.663 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.663 21:29:38 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:17.663 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.663 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.663 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.663 21:29:38 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:17.663 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.663 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.663 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.663 21:29:38 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:17.663 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.663 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.663 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.663 21:29:38 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:17.663 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.663 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.663 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.663 21:29:38 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:17.663 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.663 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.663 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.663 21:29:38 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:17.663 21:29:38 -- setup/common.sh@33 -- # echo 0 00:16:17.663 21:29:38 -- setup/common.sh@33 -- # return 0 00:16:17.663 21:29:38 -- setup/hugepages.sh@97 -- # anon=0 00:16:17.663 21:29:38 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:16:17.663 21:29:38 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:16:17.663 21:29:38 -- setup/common.sh@18 -- # local node= 00:16:17.663 21:29:38 -- setup/common.sh@19 -- # local var val 00:16:17.663 21:29:38 -- setup/common.sh@20 -- # local mem_f mem 00:16:17.663 21:29:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:16:17.663 21:29:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:16:17.663 21:29:38 -- setup/common.sh@25 -- # [[ -n '' ]] 00:16:17.663 21:29:38 -- setup/common.sh@28 -- # mapfile -t mem 00:16:17.663 21:29:38 -- 
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:16:17.663 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.663 21:29:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6704456 kB' 'MemAvailable: 9474956 kB' 'Buffers: 2436 kB' 'Cached: 2974804 kB' 'SwapCached: 0 kB' 'Active: 450256 kB' 'Inactive: 2645872 kB' 'Active(anon): 129356 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2645872 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 120504 kB' 'Mapped: 49052 kB' 'Shmem: 10468 kB' 'KReclaimable: 81332 kB' 'Slab: 159576 kB' 'SReclaimable: 81332 kB' 'SUnreclaim: 78244 kB' 'KernelStack: 6552 kB' 'PageTables: 4392 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 353760 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54932 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB' 00:16:17.664 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.664 21:29:38 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.664 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.664 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.664 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.664 21:29:38 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.664 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.664 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.664 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.664 21:29:38 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.664 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.664 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.664 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.664 21:29:38 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.664 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.664 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.664 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.664 21:29:38 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.664 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.664 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.664 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.664 21:29:38 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.664 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.664 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.664 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.664 21:29:38 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.664 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.664 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.664 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.664 21:29:38 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:16:17.664 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.664 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.664 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.664 21:29:38 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.664 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.664 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.664 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.664 21:29:38 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.664 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.664 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.664 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.664 21:29:38 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.664 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.664 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.664 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.664 21:29:38 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.664 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.664 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.664 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.664 21:29:38 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.664 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.664 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.664 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.664 21:29:38 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.664 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.664 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.664 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.664 21:29:38 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.664 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.664 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.664 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.664 21:29:38 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.664 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.664 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.664 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.664 21:29:38 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.664 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.664 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.664 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.664 21:29:38 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.664 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.664 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.664 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.664 21:29:38 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.664 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.664 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.664 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.664 21:29:38 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.664 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.664 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.664 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 
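The long run of "[[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue" entries in this trace is ordinary bash xtrace of a single /proc/meminfo lookup: setup/common.sh reads the file into an array, strips any "Node N " prefix, splits each line on ': ', and echoes the value once the requested key is reached. A minimal, self-contained sketch of that pattern (the function name and exact structure here are illustrative, not the project's get_meminfo verbatim):

    shopt -s extglob   # needed for the +([0-9]) pattern below

    get_meminfo_sketch() {   # hypothetical name; the real helper is get_meminfo in setup/common.sh
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")       # strip the "Node N " prefix on per-node files
        local var val _
        while IFS=': ' read -r var val _; do   # split "HugePages_Total:  1025" into key/value
            [[ $var == "$get" ]] && { echo "${val:-0}"; return 0; }
        done < <(printf '%s\n' "${mem[@]}")
        echo 0
    }

    get_meminfo_sketch HugePages_Total   # prints 1025 on the machine traced above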
00:16:17.664 21:29:38 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.664 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.664 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.664 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.664 21:29:38 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.664 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.664 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.664 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.664 21:29:38 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.664 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.664 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.664 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.664 21:29:38 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.664 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.664 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.664 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.664 21:29:38 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.664 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.664 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.664 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.664 21:29:38 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.664 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.664 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.664 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.664 21:29:38 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.664 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.664 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.664 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.664 21:29:38 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.664 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.664 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.664 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.664 21:29:38 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.664 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.664 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.664 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.664 21:29:38 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.664 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.664 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.664 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.664 21:29:38 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.664 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.664 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.664 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.664 21:29:38 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.664 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.664 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.664 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.664 21:29:38 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.664 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.664 21:29:38 -- 
setup/common.sh@31 -- # IFS=': ' 00:16:17.664 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.664 21:29:38 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.664 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.664 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.664 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.664 21:29:38 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.664 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.664 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.664 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.664 21:29:38 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.664 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.664 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.664 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.664 21:29:38 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.664 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.664 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.664 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.664 21:29:38 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.664 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.664 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.664 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.664 21:29:38 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.664 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.664 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.664 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.664 21:29:38 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.664 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.664 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.664 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.664 21:29:38 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.664 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.664 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.664 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.664 21:29:38 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.664 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.664 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.664 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.664 21:29:38 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.664 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.664 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.664 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.665 21:29:38 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.665 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.665 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.665 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.665 21:29:38 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.665 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.665 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.665 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.665 21:29:38 -- setup/common.sh@32 -- # [[ 
CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.665 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.665 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.665 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.665 21:29:38 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.665 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.665 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.665 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.665 21:29:38 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.665 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.665 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.665 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.665 21:29:38 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.665 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.665 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.665 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.665 21:29:38 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.665 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.665 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.665 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.665 21:29:38 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.665 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.665 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.665 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.665 21:29:38 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.665 21:29:38 -- setup/common.sh@33 -- # echo 0 00:16:17.665 21:29:38 -- setup/common.sh@33 -- # return 0 00:16:17.665 21:29:38 -- setup/hugepages.sh@99 -- # surp=0 00:16:17.665 21:29:38 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:16:17.665 21:29:38 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:16:17.665 21:29:38 -- setup/common.sh@18 -- # local node= 00:16:17.665 21:29:38 -- setup/common.sh@19 -- # local var val 00:16:17.665 21:29:38 -- setup/common.sh@20 -- # local mem_f mem 00:16:17.665 21:29:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:16:17.665 21:29:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:16:17.665 21:29:38 -- setup/common.sh@25 -- # [[ -n '' ]] 00:16:17.665 21:29:38 -- setup/common.sh@28 -- # mapfile -t mem 00:16:17.665 21:29:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:16:17.665 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.665 21:29:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6704456 kB' 'MemAvailable: 9474956 kB' 'Buffers: 2436 kB' 'Cached: 2974804 kB' 'SwapCached: 0 kB' 'Active: 450168 kB' 'Inactive: 2645872 kB' 'Active(anon): 129268 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2645872 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 120716 kB' 'Mapped: 48924 kB' 'Shmem: 10468 kB' 'KReclaimable: 81332 kB' 'Slab: 159564 kB' 'SReclaimable: 81332 kB' 'SUnreclaim: 78232 kB' 'KernelStack: 6576 kB' 'PageTables: 4348 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 351880 kB' 'VmallocTotal: 34359738367 kB' 
'VmallocUsed: 54916 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB' 00:16:17.665 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.665 21:29:38 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:17.665 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.665 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.665 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.665 21:29:38 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:17.665 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.665 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.665 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.665 21:29:38 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:17.665 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.665 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.665 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.665 21:29:38 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:17.665 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.665 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.665 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.665 21:29:38 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:17.665 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.665 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.665 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.665 21:29:38 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:17.665 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.665 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.665 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.665 21:29:38 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:17.665 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.665 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.665 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.665 21:29:38 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:17.665 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.665 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.665 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.665 21:29:38 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:17.665 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.665 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.665 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.665 21:29:38 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:17.665 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.665 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.665 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.665 21:29:38 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:17.665 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.665 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 
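As a sanity check on the meminfo snapshots printed in this trace: 1025 huge pages of 2048 kB account exactly for the 'Hugetlb: 2099200 kB' line, and HUGEMEM=2049 (MB) is the 2098176 kB that odd_alloc passes to get_test_nr_hugepages. One plausible way to turn that request into the odd count of 1025 pages is ceiling division; that is an assumption for illustration, and the exact formula lives in setup/hugepages.sh:

    hugepagesize_kb=2048       # 'Hugepagesize: 2048 kB' in the snapshot
    nr_hugepages=1025          # 'HugePages_Total: 1025'
    requested_kb=2098176       # argument passed to get_test_nr_hugepages for odd_alloc (2049 MB)
    echo $(( nr_hugepages * hugepagesize_kb ))                          # 2099200, matches 'Hugetlb: 2099200 kB'
    echo $(( (requested_kb + hugepagesize_kb - 1) / hugepagesize_kb ))  # 1025 if the request is rounded up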
00:16:17.665 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.665 21:29:38 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:17.665 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.665 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.665 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.665 21:29:38 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:17.665 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.665 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.665 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.665 21:29:38 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:17.665 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.665 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.665 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.665 21:29:38 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:17.665 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.665 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.665 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.665 21:29:38 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:17.665 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.665 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.665 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.665 21:29:38 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:17.665 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.665 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.665 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.665 21:29:38 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:17.665 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.665 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.665 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.665 21:29:38 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:17.665 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.665 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.665 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.665 21:29:38 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:17.665 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.665 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.665 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.665 21:29:38 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:17.665 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.665 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.665 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.665 21:29:38 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:17.665 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.665 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.665 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.665 21:29:38 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:17.665 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.665 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.665 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.665 21:29:38 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:17.665 21:29:38 -- 
setup/common.sh@32 -- # continue 00:16:17.665 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.665 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.665 21:29:38 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:17.665 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.665 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.665 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.665 21:29:38 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:17.665 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.665 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.665 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.665 21:29:38 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:17.665 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.665 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.665 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.665 21:29:38 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:17.666 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.666 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.666 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.666 21:29:38 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:17.666 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.666 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.666 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.666 21:29:38 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:17.666 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.666 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.666 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.666 21:29:38 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:17.666 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.666 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.666 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.666 21:29:38 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:17.666 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.666 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.666 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.666 21:29:38 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:17.666 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.666 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.666 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.666 21:29:38 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:17.666 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.666 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.666 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.666 21:29:38 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:17.666 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.666 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.666 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.666 21:29:38 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:17.666 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.666 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.666 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.666 
21:29:38 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:17.666 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.666 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.666 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.666 21:29:38 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:17.666 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.666 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.666 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.666 21:29:38 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:17.666 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.666 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.666 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.666 21:29:38 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:17.666 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.666 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.666 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.666 21:29:38 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:17.666 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.666 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.666 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.666 21:29:38 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:17.666 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.666 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.666 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.666 21:29:38 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:17.666 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.666 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.666 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.666 21:29:38 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:17.666 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.666 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.666 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.666 21:29:38 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:17.666 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.666 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.666 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.666 21:29:38 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:17.666 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.666 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.666 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.666 21:29:38 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:17.666 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.666 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.666 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.666 21:29:38 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:17.666 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.666 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.666 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.666 21:29:38 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:17.666 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.666 
21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.666 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.666 21:29:38 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:17.666 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.666 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.666 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.666 21:29:38 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:17.666 21:29:38 -- setup/common.sh@33 -- # echo 0 00:16:17.666 21:29:38 -- setup/common.sh@33 -- # return 0 00:16:17.666 nr_hugepages=1025 00:16:17.666 21:29:38 -- setup/hugepages.sh@100 -- # resv=0 00:16:17.666 21:29:38 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:16:17.666 resv_hugepages=0 00:16:17.666 surplus_hugepages=0 00:16:17.666 21:29:38 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:16:17.666 21:29:38 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:16:17.666 anon_hugepages=0 00:16:17.666 21:29:38 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:16:17.666 21:29:38 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:16:17.666 21:29:38 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:16:17.666 21:29:38 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:16:17.666 21:29:38 -- setup/common.sh@17 -- # local get=HugePages_Total 00:16:17.666 21:29:38 -- setup/common.sh@18 -- # local node= 00:16:17.666 21:29:38 -- setup/common.sh@19 -- # local var val 00:16:17.666 21:29:38 -- setup/common.sh@20 -- # local mem_f mem 00:16:17.666 21:29:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:16:17.666 21:29:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:16:17.666 21:29:38 -- setup/common.sh@25 -- # [[ -n '' ]] 00:16:17.666 21:29:38 -- setup/common.sh@28 -- # mapfile -t mem 00:16:17.666 21:29:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:16:17.666 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.666 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.666 21:29:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6704456 kB' 'MemAvailable: 9474956 kB' 'Buffers: 2436 kB' 'Cached: 2974804 kB' 'SwapCached: 0 kB' 'Active: 450372 kB' 'Inactive: 2645872 kB' 'Active(anon): 129472 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2645872 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 120684 kB' 'Mapped: 48924 kB' 'Shmem: 10468 kB' 'KReclaimable: 81332 kB' 'Slab: 159556 kB' 'SReclaimable: 81332 kB' 'SUnreclaim: 78224 kB' 'KernelStack: 6576 kB' 'PageTables: 4352 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 351880 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54916 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB' 00:16:17.666 21:29:38 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:17.666 21:29:38 -- 
setup/common.sh@32 -- # continue 00:16:17.666 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.666 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.666 21:29:38 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:17.666 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.666 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.666 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.666 21:29:38 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:17.666 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.666 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.666 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.666 21:29:38 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:17.666 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.666 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.666 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.666 21:29:38 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:17.666 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.666 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.666 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.666 21:29:38 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:17.666 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.666 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.666 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.666 21:29:38 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:17.666 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.666 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.666 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.666 21:29:38 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:17.666 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.666 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.666 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.666 21:29:38 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:17.666 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.666 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.666 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.666 21:29:38 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:17.666 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.666 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.666 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.667 21:29:38 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:17.667 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.667 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.667 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.667 21:29:38 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:17.667 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.667 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.667 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.667 21:29:38 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:17.667 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.667 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.667 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 
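These per-key lookups feed the bookkeeping visible at setup/hugepages.sh@107 and @109 above: with surplus and reserved pages both 0 (and no anonymous huge pages), the HugePages_Total reported by the kernel has to equal the 1025 pages the test requested. Condensed, with the values taken from this run (variable names illustrative):

    nr_hugepages=1025                    # requested by odd_alloc
    anon=0 surp=0 resv=0 total=1025      # AnonHugePages / HugePages_Surp / HugePages_Rsvd / HugePages_Total above
    (( total == nr_hugepages + surp + resv )) || echo "unexpected surplus/reserved pages"
    (( total == nr_hugepages ))               || echo "nr_hugepages mismatch"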
00:16:17.667 21:29:38 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:17.667 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.667 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.667 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.667 21:29:38 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:17.667 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.667 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.667 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.667 21:29:38 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:17.667 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.667 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.667 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.667 21:29:38 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:17.667 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.667 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.667 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.667 21:29:38 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:17.667 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.667 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.667 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.667 21:29:38 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:17.667 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.667 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.667 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.667 21:29:38 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:17.667 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.667 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.667 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.667 21:29:38 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:17.667 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.667 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.667 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.667 21:29:38 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:17.667 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.667 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.667 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.667 21:29:38 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:17.667 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.667 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.667 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.667 21:29:38 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:17.667 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.667 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.667 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.667 21:29:38 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:17.667 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.667 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.667 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.667 21:29:38 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:17.667 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.667 21:29:38 -- 
setup/common.sh@31 -- # IFS=': ' 00:16:17.667 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.667 21:29:38 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:17.667 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.667 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.667 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.667 21:29:38 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:17.667 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.667 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.667 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.667 21:29:38 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:17.667 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.667 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.667 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.667 21:29:38 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:17.667 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.667 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.667 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.667 21:29:38 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:17.667 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.667 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.667 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.667 21:29:38 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:17.667 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.667 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.667 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.667 21:29:38 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:17.667 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.667 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.667 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.667 21:29:38 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:17.667 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.667 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.667 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.667 21:29:38 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:17.667 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.667 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.667 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.667 21:29:38 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:17.667 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.667 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.667 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.667 21:29:38 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:17.667 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.667 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.667 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.667 21:29:38 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:17.667 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.667 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.667 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.667 21:29:38 -- setup/common.sh@32 -- 
# [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:17.667 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.667 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.667 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.667 21:29:38 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:17.667 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.667 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.667 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.667 21:29:38 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:17.667 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.667 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.667 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.667 21:29:38 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:17.667 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.667 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.667 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.667 21:29:38 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:17.667 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.667 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.667 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.667 21:29:38 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:17.667 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.667 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.667 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.667 21:29:38 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:17.667 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.667 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.667 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.667 21:29:38 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:17.667 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.667 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.667 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.667 21:29:38 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:17.668 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.668 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.668 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.668 21:29:38 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:17.668 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.668 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.668 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.668 21:29:38 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:17.668 21:29:38 -- setup/common.sh@33 -- # echo 1025 00:16:17.668 21:29:38 -- setup/common.sh@33 -- # return 0 00:16:17.668 21:29:38 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:16:17.668 21:29:38 -- setup/hugepages.sh@112 -- # get_nodes 00:16:17.668 21:29:38 -- setup/hugepages.sh@27 -- # local node 00:16:17.668 21:29:38 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:16:17.668 21:29:38 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:16:17.668 21:29:38 -- setup/hugepages.sh@32 -- # no_nodes=1 00:16:17.668 21:29:38 -- setup/hugepages.sh@33 -- # (( no_nodes > 
0 )) 00:16:17.668 21:29:38 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:16:17.668 21:29:38 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:16:17.668 21:29:38 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:16:17.668 21:29:38 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:16:17.668 21:29:38 -- setup/common.sh@18 -- # local node=0 00:16:17.668 21:29:38 -- setup/common.sh@19 -- # local var val 00:16:17.668 21:29:38 -- setup/common.sh@20 -- # local mem_f mem 00:16:17.668 21:29:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:16:17.668 21:29:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:16:17.668 21:29:38 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:16:17.668 21:29:38 -- setup/common.sh@28 -- # mapfile -t mem 00:16:17.668 21:29:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:16:17.668 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.668 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.668 21:29:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6704208 kB' 'MemUsed: 5537768 kB' 'SwapCached: 0 kB' 'Active: 450124 kB' 'Inactive: 2645872 kB' 'Active(anon): 129224 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2645872 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'FilePages: 2977240 kB' 'Mapped: 48924 kB' 'AnonPages: 120660 kB' 'Shmem: 10468 kB' 'KernelStack: 6560 kB' 'PageTables: 4304 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 81332 kB' 'Slab: 159556 kB' 'SReclaimable: 81332 kB' 'SUnreclaim: 78224 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:16:17.668 21:29:38 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.668 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.668 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.668 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.668 21:29:38 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.668 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.668 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.668 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.668 21:29:38 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.668 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.668 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.668 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.668 21:29:38 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.668 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.668 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.668 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.668 21:29:38 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.668 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.668 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.668 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.668 21:29:38 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.668 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.668 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.668 21:29:38 -- 
setup/common.sh@31 -- # read -r var val _ 00:16:17.668 21:29:38 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.668 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.668 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.668 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.668 21:29:38 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.668 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.668 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.668 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.668 21:29:38 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.668 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.668 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.668 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.668 21:29:38 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.668 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.668 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.668 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.668 21:29:38 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.668 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.668 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.668 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.668 21:29:38 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.668 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.668 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.668 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.668 21:29:38 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.668 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.668 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.668 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.668 21:29:38 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.668 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.668 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.668 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.668 21:29:38 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.668 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.668 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.668 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.668 21:29:38 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.668 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.668 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.668 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.668 21:29:38 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.668 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.668 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.668 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.668 21:29:38 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.668 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.668 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.668 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.668 21:29:38 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.668 21:29:38 -- setup/common.sh@32 -- # 
continue 00:16:17.668 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.668 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.668 21:29:38 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.668 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.668 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.668 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.668 21:29:38 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.668 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.668 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.668 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.668 21:29:38 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.668 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.668 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.668 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.668 21:29:38 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.668 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.668 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.668 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.668 21:29:38 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.668 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.668 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.668 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.668 21:29:38 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.668 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.668 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.668 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.668 21:29:38 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.668 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.668 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.668 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.668 21:29:38 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.668 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.668 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.668 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.668 21:29:38 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.668 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.668 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.668 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.668 21:29:38 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.668 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.668 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.668 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.668 21:29:38 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.668 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.668 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.668 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.668 21:29:38 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.668 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.668 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.668 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.668 21:29:38 -- 
setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.668 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.668 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.668 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.669 21:29:38 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.669 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.669 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.669 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.669 21:29:38 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.669 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.669 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.669 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.669 21:29:38 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.669 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.669 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.669 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.669 21:29:38 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.669 21:29:38 -- setup/common.sh@32 -- # continue 00:16:17.669 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:17.669 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:17.669 21:29:38 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:17.669 21:29:38 -- setup/common.sh@33 -- # echo 0 00:16:17.669 21:29:38 -- setup/common.sh@33 -- # return 0 00:16:17.669 21:29:38 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:16:17.669 21:29:38 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:16:17.669 21:29:38 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:16:17.669 node0=1025 expecting 1025 00:16:17.669 ************************************ 00:16:17.669 END TEST odd_alloc 00:16:17.669 ************************************ 00:16:17.669 21:29:38 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:16:17.669 21:29:38 -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:16:17.669 21:29:38 -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:16:17.669 00:16:17.669 real 0m0.501s 00:16:17.669 user 0m0.236s 00:16:17.669 sys 0m0.272s 00:16:17.669 21:29:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:17.669 21:29:38 -- common/autotest_common.sh@10 -- # set +x 00:16:17.669 21:29:38 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:16:17.669 21:29:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:16:17.669 21:29:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:17.669 21:29:38 -- common/autotest_common.sh@10 -- # set +x 00:16:17.669 ************************************ 00:16:17.669 START TEST custom_alloc 00:16:17.669 ************************************ 00:16:17.669 21:29:38 -- common/autotest_common.sh@1104 -- # custom_alloc 00:16:17.669 21:29:38 -- setup/hugepages.sh@167 -- # local IFS=, 00:16:17.669 21:29:38 -- setup/hugepages.sh@169 -- # local node 00:16:17.669 21:29:38 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:16:17.669 21:29:38 -- setup/hugepages.sh@170 -- # local nodes_hp 00:16:17.669 21:29:38 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:16:17.669 21:29:38 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:16:17.669 21:29:38 -- setup/hugepages.sh@49 -- # local size=1048576 00:16:17.669 21:29:38 
-- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:16:17.669 21:29:38 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:16:17.669 21:29:38 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:16:17.669 21:29:38 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:16:17.669 21:29:38 -- setup/hugepages.sh@62 -- # user_nodes=() 00:16:17.669 21:29:38 -- setup/hugepages.sh@62 -- # local user_nodes 00:16:17.669 21:29:38 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:16:17.669 21:29:38 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:16:17.669 21:29:38 -- setup/hugepages.sh@67 -- # nodes_test=() 00:16:17.669 21:29:38 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:16:17.669 21:29:38 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:16:17.669 21:29:38 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:16:17.669 21:29:38 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:16:17.669 21:29:38 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:16:17.669 21:29:38 -- setup/hugepages.sh@83 -- # : 0 00:16:17.669 21:29:38 -- setup/hugepages.sh@84 -- # : 0 00:16:17.669 21:29:38 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:16:17.669 21:29:38 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:16:17.669 21:29:38 -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:16:17.669 21:29:38 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:16:17.669 21:29:38 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:16:17.669 21:29:38 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:16:17.669 21:29:38 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:16:17.669 21:29:38 -- setup/hugepages.sh@62 -- # user_nodes=() 00:16:17.669 21:29:38 -- setup/hugepages.sh@62 -- # local user_nodes 00:16:17.669 21:29:38 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:16:17.669 21:29:38 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:16:17.669 21:29:38 -- setup/hugepages.sh@67 -- # nodes_test=() 00:16:17.669 21:29:38 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:16:17.669 21:29:38 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:16:17.669 21:29:38 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:16:17.669 21:29:38 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:16:17.669 21:29:38 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:16:17.669 21:29:38 -- setup/hugepages.sh@78 -- # return 0 00:16:17.669 21:29:38 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:16:17.669 21:29:38 -- setup/hugepages.sh@187 -- # setup output 00:16:17.669 21:29:38 -- setup/common.sh@9 -- # [[ output == output ]] 00:16:17.669 21:29:38 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:17.926 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:18.190 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:18.190 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:18.190 21:29:38 -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:16:18.190 21:29:38 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:16:18.190 21:29:38 -- setup/hugepages.sh@89 -- # local node 00:16:18.190 21:29:38 -- setup/hugepages.sh@90 -- # local sorted_t 00:16:18.190 21:29:38 -- setup/hugepages.sh@91 -- # local sorted_s 00:16:18.190 21:29:38 -- setup/hugepages.sh@92 -- # local surp 00:16:18.190 21:29:38 -- setup/hugepages.sh@93 -- # local resv 00:16:18.190 21:29:38 -- setup/hugepages.sh@94 -- # local anon 
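custom_alloc requests 512 default-size pages (1048576 kB at the 2048 kB hugepage size reported in the dumps below) pinned to a single node via HUGENODE='nodes_hp[0]=512' before re-running setup. Whatever setup.sh does internally, a per-node request like that ultimately corresponds to a write into the standard kernel sysfs layout; a rough sketch of that idea (the loop and parsing below are illustrative, not taken from setup.sh, and need root):

    # Illustrative only: apply "nodes_hp[N]=COUNT" requests through sysfs (2048 kB pages assumed).
    for spec in 'nodes_hp[0]=512'; do
        node=${spec#nodes_hp[}; node=${node%%]*}   # -> 0
        count=${spec#*=}                           # -> 512
        echo "$count" > "/sys/devices/system/node/node${node}/hugepages/hugepages-2048kB/nr_hugepages"
    done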
00:16:18.190 21:29:38 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:16:18.190 21:29:38 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:16:18.190 21:29:38 -- setup/common.sh@17 -- # local get=AnonHugePages 00:16:18.190 21:29:38 -- setup/common.sh@18 -- # local node= 00:16:18.190 21:29:38 -- setup/common.sh@19 -- # local var val 00:16:18.190 21:29:38 -- setup/common.sh@20 -- # local mem_f mem 00:16:18.190 21:29:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:16:18.190 21:29:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:16:18.190 21:29:38 -- setup/common.sh@25 -- # [[ -n '' ]] 00:16:18.190 21:29:38 -- setup/common.sh@28 -- # mapfile -t mem 00:16:18.190 21:29:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:16:18.190 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.190 21:29:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7754192 kB' 'MemAvailable: 10524700 kB' 'Buffers: 2436 kB' 'Cached: 2974804 kB' 'SwapCached: 0 kB' 'Active: 450648 kB' 'Inactive: 2645872 kB' 'Active(anon): 129748 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2645872 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 120896 kB' 'Mapped: 49036 kB' 'Shmem: 10468 kB' 'KReclaimable: 81348 kB' 'Slab: 159588 kB' 'SReclaimable: 81348 kB' 'SUnreclaim: 78240 kB' 'KernelStack: 6552 kB' 'PageTables: 4180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 351880 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54964 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB' 00:16:18.190 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.190 21:29:38 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:18.190 21:29:38 -- setup/common.sh@32 -- # continue 00:16:18.190 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.190 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.190 21:29:38 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:18.190 21:29:38 -- setup/common.sh@32 -- # continue 00:16:18.190 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.190 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.190 21:29:38 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:18.190 21:29:38 -- setup/common.sh@32 -- # continue 00:16:18.190 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.190 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.190 21:29:38 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:18.190 21:29:38 -- setup/common.sh@32 -- # continue 00:16:18.190 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.190 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.190 21:29:38 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:18.190 21:29:38 -- setup/common.sh@32 -- # continue 00:16:18.190 21:29:38 -- 
setup/common.sh@31 -- # IFS=': ' 00:16:18.190 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.190 21:29:38 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:18.190 21:29:38 -- setup/common.sh@32 -- # continue 00:16:18.190 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.190 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.190 21:29:38 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:18.190 21:29:38 -- setup/common.sh@32 -- # continue 00:16:18.190 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.190 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.190 21:29:38 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:18.190 21:29:38 -- setup/common.sh@32 -- # continue 00:16:18.190 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.190 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.190 21:29:38 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:18.190 21:29:38 -- setup/common.sh@32 -- # continue 00:16:18.190 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.190 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.190 21:29:38 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:18.190 21:29:38 -- setup/common.sh@32 -- # continue 00:16:18.190 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.190 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.190 21:29:38 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:18.190 21:29:38 -- setup/common.sh@32 -- # continue 00:16:18.190 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.190 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.190 21:29:38 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:18.190 21:29:38 -- setup/common.sh@32 -- # continue 00:16:18.190 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.190 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.190 21:29:38 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:18.190 21:29:38 -- setup/common.sh@32 -- # continue 00:16:18.190 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.190 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.190 21:29:38 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:18.190 21:29:38 -- setup/common.sh@32 -- # continue 00:16:18.190 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.190 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.190 21:29:38 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:18.190 21:29:38 -- setup/common.sh@32 -- # continue 00:16:18.190 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.190 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.190 21:29:38 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:18.190 21:29:38 -- setup/common.sh@32 -- # continue 00:16:18.190 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.190 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.190 21:29:38 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:18.190 21:29:38 -- setup/common.sh@32 -- # continue 00:16:18.190 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.190 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.190 21:29:38 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:18.190 
21:29:38 -- setup/common.sh@32 -- # continue 00:16:18.190 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.190 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.190 21:29:38 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:18.190 21:29:38 -- setup/common.sh@32 -- # continue 00:16:18.190 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.190 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.190 21:29:38 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:18.190 21:29:38 -- setup/common.sh@32 -- # continue 00:16:18.190 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.190 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.190 21:29:38 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:18.190 21:29:38 -- setup/common.sh@32 -- # continue 00:16:18.191 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.191 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.191 21:29:38 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:18.191 21:29:38 -- setup/common.sh@32 -- # continue 00:16:18.191 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.191 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.191 21:29:38 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:18.191 21:29:38 -- setup/common.sh@32 -- # continue 00:16:18.191 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.191 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.191 21:29:38 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:18.191 21:29:38 -- setup/common.sh@32 -- # continue 00:16:18.191 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.191 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.191 21:29:38 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:18.191 21:29:38 -- setup/common.sh@32 -- # continue 00:16:18.191 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.191 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.191 21:29:38 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:18.191 21:29:38 -- setup/common.sh@32 -- # continue 00:16:18.191 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.191 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.191 21:29:38 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:18.191 21:29:38 -- setup/common.sh@32 -- # continue 00:16:18.191 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.191 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.191 21:29:38 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:18.191 21:29:38 -- setup/common.sh@32 -- # continue 00:16:18.191 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.191 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.191 21:29:38 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:18.191 21:29:38 -- setup/common.sh@32 -- # continue 00:16:18.191 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.191 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.191 21:29:38 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:18.191 21:29:38 -- setup/common.sh@32 -- # continue 00:16:18.191 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.191 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.191 21:29:38 -- setup/common.sh@32 -- # 
[[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:18.191 21:29:38 -- setup/common.sh@32 -- # continue 00:16:18.191 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.191 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.191 21:29:38 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:18.191 21:29:38 -- setup/common.sh@32 -- # continue 00:16:18.191 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.191 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.191 21:29:38 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:18.191 21:29:38 -- setup/common.sh@32 -- # continue 00:16:18.191 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.191 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.191 21:29:38 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:18.191 21:29:38 -- setup/common.sh@32 -- # continue 00:16:18.191 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.191 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.191 21:29:38 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:18.191 21:29:38 -- setup/common.sh@32 -- # continue 00:16:18.191 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.191 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.191 21:29:38 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:18.191 21:29:38 -- setup/common.sh@32 -- # continue 00:16:18.191 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.191 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.191 21:29:38 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:18.191 21:29:38 -- setup/common.sh@32 -- # continue 00:16:18.191 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.191 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.191 21:29:38 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:18.191 21:29:38 -- setup/common.sh@32 -- # continue 00:16:18.191 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.191 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.191 21:29:38 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:18.191 21:29:38 -- setup/common.sh@32 -- # continue 00:16:18.191 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.191 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.191 21:29:38 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:18.191 21:29:38 -- setup/common.sh@32 -- # continue 00:16:18.191 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.191 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.191 21:29:38 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:18.191 21:29:38 -- setup/common.sh@33 -- # echo 0 00:16:18.191 21:29:38 -- setup/common.sh@33 -- # return 0 00:16:18.191 21:29:38 -- setup/hugepages.sh@97 -- # anon=0 00:16:18.191 21:29:38 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:16:18.191 21:29:38 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:16:18.191 21:29:38 -- setup/common.sh@18 -- # local node= 00:16:18.191 21:29:38 -- setup/common.sh@19 -- # local var val 00:16:18.191 21:29:38 -- setup/common.sh@20 -- # local mem_f mem 00:16:18.191 21:29:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:16:18.191 21:29:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:16:18.191 21:29:38 
-- setup/common.sh@25 -- # [[ -n '' ]] 00:16:18.191 21:29:38 -- setup/common.sh@28 -- # mapfile -t mem 00:16:18.191 21:29:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:16:18.191 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.191 21:29:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7754144 kB' 'MemAvailable: 10524652 kB' 'Buffers: 2436 kB' 'Cached: 2974804 kB' 'SwapCached: 0 kB' 'Active: 450668 kB' 'Inactive: 2645872 kB' 'Active(anon): 129768 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2645872 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 120876 kB' 'Mapped: 48924 kB' 'Shmem: 10468 kB' 'KReclaimable: 81348 kB' 'Slab: 159592 kB' 'SReclaimable: 81348 kB' 'SUnreclaim: 78244 kB' 'KernelStack: 6560 kB' 'PageTables: 4300 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 354808 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54948 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB' 00:16:18.191 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.191 21:29:38 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.191 21:29:38 -- setup/common.sh@32 -- # continue 00:16:18.191 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.191 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.191 21:29:38 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.191 21:29:38 -- setup/common.sh@32 -- # continue 00:16:18.191 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.191 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.191 21:29:38 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.191 21:29:38 -- setup/common.sh@32 -- # continue 00:16:18.191 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.191 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.191 21:29:38 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.191 21:29:38 -- setup/common.sh@32 -- # continue 00:16:18.191 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.191 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.191 21:29:38 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.191 21:29:38 -- setup/common.sh@32 -- # continue 00:16:18.191 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.191 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.191 21:29:38 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.191 21:29:38 -- setup/common.sh@32 -- # continue 00:16:18.191 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.191 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.191 21:29:38 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.191 21:29:38 -- setup/common.sh@32 -- # continue 00:16:18.191 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.191 21:29:38 -- setup/common.sh@31 
-- # read -r var val _ 00:16:18.191 21:29:38 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.191 21:29:38 -- setup/common.sh@32 -- # continue 00:16:18.191 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.191 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.191 21:29:38 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.191 21:29:38 -- setup/common.sh@32 -- # continue 00:16:18.191 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.191 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.191 21:29:38 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.191 21:29:38 -- setup/common.sh@32 -- # continue 00:16:18.191 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.191 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.191 21:29:38 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.191 21:29:38 -- setup/common.sh@32 -- # continue 00:16:18.191 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.191 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.191 21:29:38 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.191 21:29:38 -- setup/common.sh@32 -- # continue 00:16:18.191 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.191 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.191 21:29:38 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.191 21:29:38 -- setup/common.sh@32 -- # continue 00:16:18.191 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.191 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.191 21:29:38 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.191 21:29:38 -- setup/common.sh@32 -- # continue 00:16:18.191 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.191 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.191 21:29:38 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.191 21:29:38 -- setup/common.sh@32 -- # continue 00:16:18.191 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.191 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.191 21:29:38 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.192 21:29:38 -- setup/common.sh@32 -- # continue 00:16:18.192 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.192 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.192 21:29:38 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.192 21:29:38 -- setup/common.sh@32 -- # continue 00:16:18.192 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.192 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.192 21:29:38 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.192 21:29:38 -- setup/common.sh@32 -- # continue 00:16:18.192 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.192 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.192 21:29:38 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.192 21:29:38 -- setup/common.sh@32 -- # continue 00:16:18.192 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.192 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.192 21:29:38 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.192 21:29:38 -- setup/common.sh@32 -- # continue 
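This HugePages_Surp lookup, and the HugePages_Rsvd one that follows it, feed a simple accounting check (the same one the odd_alloc trace hit at setup/hugepages.sh@110): the kernel's HugePages_Total must equal the requested page count plus any surplus and reserved pages. With the values in the dump above (512 total, 0 surplus, 0 reserved against 512 requested), the arithmetic is just:

    # Illustrative only, using the values reported above.
    nr_hugepages=512 surp=0 resv=0 total=512
    (( total == nr_hugepages + surp + resv )) && echo 'hugepage accounting OK' \
        || echo "unexpected hugepage total: $total" >&2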
00:16:18.192 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.192 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.192 21:29:38 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.192 21:29:38 -- setup/common.sh@32 -- # continue 00:16:18.192 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.192 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.192 21:29:38 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.192 21:29:38 -- setup/common.sh@32 -- # continue 00:16:18.192 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.192 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.192 21:29:38 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.192 21:29:38 -- setup/common.sh@32 -- # continue 00:16:18.192 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.192 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.192 21:29:38 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.192 21:29:38 -- setup/common.sh@32 -- # continue 00:16:18.192 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.192 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.192 21:29:38 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.192 21:29:38 -- setup/common.sh@32 -- # continue 00:16:18.192 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.192 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.192 21:29:38 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.192 21:29:38 -- setup/common.sh@32 -- # continue 00:16:18.192 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.192 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.192 21:29:38 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.192 21:29:38 -- setup/common.sh@32 -- # continue 00:16:18.192 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.192 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.192 21:29:38 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.192 21:29:38 -- setup/common.sh@32 -- # continue 00:16:18.192 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.192 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.192 21:29:38 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.192 21:29:38 -- setup/common.sh@32 -- # continue 00:16:18.192 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.192 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.192 21:29:38 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.192 21:29:38 -- setup/common.sh@32 -- # continue 00:16:18.192 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.192 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.192 21:29:38 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.192 21:29:38 -- setup/common.sh@32 -- # continue 00:16:18.192 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.192 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.192 21:29:38 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.192 21:29:38 -- setup/common.sh@32 -- # continue 00:16:18.192 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.192 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.192 21:29:38 -- setup/common.sh@32 -- # [[ WritebackTmp == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.192 21:29:38 -- setup/common.sh@32 -- # continue 00:16:18.192 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.192 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.192 21:29:38 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.192 21:29:38 -- setup/common.sh@32 -- # continue 00:16:18.192 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.192 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.192 21:29:38 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.192 21:29:38 -- setup/common.sh@32 -- # continue 00:16:18.192 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.192 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.192 21:29:38 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.192 21:29:38 -- setup/common.sh@32 -- # continue 00:16:18.192 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.192 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.192 21:29:38 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.192 21:29:38 -- setup/common.sh@32 -- # continue 00:16:18.192 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.192 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.192 21:29:38 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.192 21:29:38 -- setup/common.sh@32 -- # continue 00:16:18.192 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.192 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.192 21:29:38 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.192 21:29:38 -- setup/common.sh@32 -- # continue 00:16:18.192 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.192 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.192 21:29:38 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.192 21:29:38 -- setup/common.sh@32 -- # continue 00:16:18.192 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.192 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.192 21:29:38 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.192 21:29:38 -- setup/common.sh@32 -- # continue 00:16:18.192 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.192 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.192 21:29:38 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.192 21:29:38 -- setup/common.sh@32 -- # continue 00:16:18.192 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.192 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.192 21:29:38 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.192 21:29:38 -- setup/common.sh@32 -- # continue 00:16:18.192 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.192 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.192 21:29:38 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.192 21:29:38 -- setup/common.sh@32 -- # continue 00:16:18.192 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.192 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.192 21:29:38 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.192 21:29:38 -- setup/common.sh@32 -- # continue 00:16:18.192 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 
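The same get_meminfo lookup runs either machine-wide or against a single NUMA node: given a node number it switches mem_f to /sys/devices/system/node/nodeN/meminfo and strips the 'Node N ' prefix each line carries there (as the node=0 trace did earlier), otherwise it stays on /proc/meminfo. A condensed sketch of that source selection (the function wrapper is mine; the extglob prefix strip mirrors the trace):

    # Illustrative only: read meminfo lines for an optional NUMA node.
    shopt -s extglob
    meminfo_lines() {
        local node=$1 mem_f=/proc/meminfo
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        local -a mem
        mapfile -t mem < "$mem_f"
        printf '%s\n' "${mem[@]#Node +([0-9]) }"   # drop the "Node N " prefix when present
    }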
00:16:18.192 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.192 21:29:38 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.192 21:29:38 -- setup/common.sh@32 -- # continue 00:16:18.192 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.192 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.192 21:29:38 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.192 21:29:38 -- setup/common.sh@32 -- # continue 00:16:18.192 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.192 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.192 21:29:38 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.192 21:29:38 -- setup/common.sh@32 -- # continue 00:16:18.192 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.192 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.192 21:29:38 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.192 21:29:38 -- setup/common.sh@32 -- # continue 00:16:18.192 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.192 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.192 21:29:38 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.192 21:29:38 -- setup/common.sh@32 -- # continue 00:16:18.192 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.192 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.192 21:29:38 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.192 21:29:38 -- setup/common.sh@32 -- # continue 00:16:18.192 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.192 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.192 21:29:38 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.192 21:29:38 -- setup/common.sh@33 -- # echo 0 00:16:18.192 21:29:38 -- setup/common.sh@33 -- # return 0 00:16:18.192 21:29:38 -- setup/hugepages.sh@99 -- # surp=0 00:16:18.192 21:29:38 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:16:18.192 21:29:38 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:16:18.192 21:29:38 -- setup/common.sh@18 -- # local node= 00:16:18.192 21:29:38 -- setup/common.sh@19 -- # local var val 00:16:18.192 21:29:38 -- setup/common.sh@20 -- # local mem_f mem 00:16:18.192 21:29:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:16:18.192 21:29:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:16:18.192 21:29:38 -- setup/common.sh@25 -- # [[ -n '' ]] 00:16:18.192 21:29:38 -- setup/common.sh@28 -- # mapfile -t mem 00:16:18.192 21:29:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:16:18.192 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.192 21:29:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7753892 kB' 'MemAvailable: 10524400 kB' 'Buffers: 2436 kB' 'Cached: 2974804 kB' 'SwapCached: 0 kB' 'Active: 450264 kB' 'Inactive: 2645872 kB' 'Active(anon): 129364 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2645872 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 120504 kB' 'Mapped: 48924 kB' 'Shmem: 10468 kB' 'KReclaimable: 81348 kB' 'Slab: 159584 kB' 'SReclaimable: 81348 kB' 'SUnreclaim: 78236 kB' 'KernelStack: 6544 kB' 'PageTables: 4252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 
kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 351512 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54916 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB' 00:16:18.193 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.193 21:29:38 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.193 21:29:38 -- setup/common.sh@32 -- # continue 00:16:18.193 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.193 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.193 21:29:38 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.193 21:29:38 -- setup/common.sh@32 -- # continue 00:16:18.193 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.193 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.193 21:29:38 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.193 21:29:38 -- setup/common.sh@32 -- # continue 00:16:18.193 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.193 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.193 21:29:38 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.193 21:29:38 -- setup/common.sh@32 -- # continue 00:16:18.193 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.193 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.193 21:29:38 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.193 21:29:38 -- setup/common.sh@32 -- # continue 00:16:18.193 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.193 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.193 21:29:38 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.193 21:29:38 -- setup/common.sh@32 -- # continue 00:16:18.193 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.193 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.193 21:29:38 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.193 21:29:38 -- setup/common.sh@32 -- # continue 00:16:18.193 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.193 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.193 21:29:38 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.193 21:29:38 -- setup/common.sh@32 -- # continue 00:16:18.193 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.193 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.193 21:29:38 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.193 21:29:38 -- setup/common.sh@32 -- # continue 00:16:18.193 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.193 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.193 21:29:38 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.193 21:29:38 -- setup/common.sh@32 -- # continue 00:16:18.193 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.193 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.193 21:29:38 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.193 
21:29:38 -- setup/common.sh@32 -- # continue 00:16:18.193 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.193 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.193 21:29:38 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.193 21:29:38 -- setup/common.sh@32 -- # continue 00:16:18.193 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.193 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.193 21:29:38 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.193 21:29:38 -- setup/common.sh@32 -- # continue 00:16:18.193 21:29:38 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.193 21:29:38 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.193 21:29:38 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.193 21:29:38 -- setup/common.sh@32 -- # continue 00:16:18.193 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.193 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.193 21:29:39 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.193 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.193 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.193 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.193 21:29:39 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.193 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.193 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.193 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.193 21:29:39 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.193 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.193 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.193 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.193 21:29:39 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.193 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.193 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.193 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.193 21:29:39 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.193 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.193 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.193 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.193 21:29:39 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.193 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.193 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.193 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.193 21:29:39 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.193 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.193 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.193 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.193 21:29:39 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.193 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.193 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.193 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.193 21:29:39 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.193 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.193 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.193 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.193 21:29:39 -- 
setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.193 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.193 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.193 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.193 21:29:39 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.193 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.193 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.193 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.193 21:29:39 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.193 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.193 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.193 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.193 21:29:39 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.193 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.193 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.193 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.193 21:29:39 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.193 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.193 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.193 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.193 21:29:39 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.193 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.193 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.193 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.193 21:29:39 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.193 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.193 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.193 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.193 21:29:39 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.193 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.193 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.193 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.193 21:29:39 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.193 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.193 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.193 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.193 21:29:39 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.193 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.193 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.193 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.193 21:29:39 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.193 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.193 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.193 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.193 21:29:39 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.193 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.193 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.193 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.193 21:29:39 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.193 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.193 21:29:39 -- setup/common.sh@31 
-- # IFS=': ' 00:16:18.193 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.193 21:29:39 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.193 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.193 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.193 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.193 21:29:39 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.193 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.193 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.193 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.193 21:29:39 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.193 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.193 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.193 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.193 21:29:39 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.193 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.193 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.193 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.193 21:29:39 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.193 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.193 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.193 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.193 21:29:39 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.193 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.193 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.193 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.193 21:29:39 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.193 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.193 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.193 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.193 21:29:39 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.194 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.194 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.194 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.194 21:29:39 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.194 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.194 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.194 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.194 21:29:39 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.194 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.194 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.194 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.194 21:29:39 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.194 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.194 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.194 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.194 21:29:39 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.194 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.194 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.194 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.194 21:29:39 -- setup/common.sh@32 -- # [[ HugePages_Total == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.194 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.194 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.194 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.194 21:29:39 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.194 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.194 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.194 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.194 21:29:39 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.194 21:29:39 -- setup/common.sh@33 -- # echo 0 00:16:18.194 21:29:39 -- setup/common.sh@33 -- # return 0 00:16:18.194 nr_hugepages=512 00:16:18.194 resv_hugepages=0 00:16:18.194 surplus_hugepages=0 00:16:18.194 anon_hugepages=0 00:16:18.194 21:29:39 -- setup/hugepages.sh@100 -- # resv=0 00:16:18.194 21:29:39 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:16:18.194 21:29:39 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:16:18.194 21:29:39 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:16:18.194 21:29:39 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:16:18.194 21:29:39 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:16:18.194 21:29:39 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:16:18.194 21:29:39 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:16:18.194 21:29:39 -- setup/common.sh@17 -- # local get=HugePages_Total 00:16:18.194 21:29:39 -- setup/common.sh@18 -- # local node= 00:16:18.194 21:29:39 -- setup/common.sh@19 -- # local var val 00:16:18.194 21:29:39 -- setup/common.sh@20 -- # local mem_f mem 00:16:18.194 21:29:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:16:18.194 21:29:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:16:18.194 21:29:39 -- setup/common.sh@25 -- # [[ -n '' ]] 00:16:18.194 21:29:39 -- setup/common.sh@28 -- # mapfile -t mem 00:16:18.194 21:29:39 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:16:18.194 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.194 21:29:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7753892 kB' 'MemAvailable: 10524404 kB' 'Buffers: 2436 kB' 'Cached: 2974808 kB' 'SwapCached: 0 kB' 'Active: 450616 kB' 'Inactive: 2645876 kB' 'Active(anon): 129716 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2645876 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 120612 kB' 'Mapped: 48924 kB' 'Shmem: 10468 kB' 'KReclaimable: 81348 kB' 'Slab: 159584 kB' 'SReclaimable: 81348 kB' 'SUnreclaim: 78236 kB' 'KernelStack: 6544 kB' 'PageTables: 4244 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 351880 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54916 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB' 00:16:18.194 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.194 21:29:39 -- 
setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:18.194 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.194 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.194 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.194 21:29:39 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:18.194 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.194 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.194 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.194 21:29:39 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:18.194 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.194 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.194 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.194 21:29:39 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:18.194 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.194 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.194 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.194 21:29:39 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:18.194 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.194 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.194 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.194 21:29:39 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:18.194 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.194 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.194 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.194 21:29:39 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:18.194 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.194 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.194 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.194 21:29:39 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:18.194 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.194 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.194 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.194 21:29:39 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:18.194 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.194 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.194 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.194 21:29:39 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:18.194 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.194 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.194 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.194 21:29:39 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:18.194 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.194 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.194 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.194 21:29:39 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:18.194 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.194 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.194 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.194 21:29:39 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:18.194 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.194 21:29:39 -- 
setup/common.sh@31 -- # IFS=': ' 00:16:18.194 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.194 21:29:39 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:18.194 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.194 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.194 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.194 21:29:39 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:18.194 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.194 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.194 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.194 21:29:39 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:18.194 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.194 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.194 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.194 21:29:39 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:18.194 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.194 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.194 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.194 21:29:39 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:18.194 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.194 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.194 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.194 21:29:39 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:18.194 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.194 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.194 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.195 21:29:39 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:18.195 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.195 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.195 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.195 21:29:39 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:18.195 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.195 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.195 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.195 21:29:39 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:18.195 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.195 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.195 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.195 21:29:39 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:18.195 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.195 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.195 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.195 21:29:39 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:18.195 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.195 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.195 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.195 21:29:39 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:18.195 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.195 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.195 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.195 21:29:39 -- setup/common.sh@32 -- # [[ SReclaimable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:18.195 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.195 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.195 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.195 21:29:39 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:18.195 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.195 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.195 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.195 21:29:39 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:18.195 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.195 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.195 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.195 21:29:39 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:18.195 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.195 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.195 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.195 21:29:39 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:18.195 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.195 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.195 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.195 21:29:39 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:18.195 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.195 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.195 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.195 21:29:39 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:18.195 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.195 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.195 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.195 21:29:39 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:18.195 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.195 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.195 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.195 21:29:39 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:18.195 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.195 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.195 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.195 21:29:39 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:18.195 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.195 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.195 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.195 21:29:39 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:18.195 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.195 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.195 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.195 21:29:39 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:18.195 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.195 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.195 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.195 21:29:39 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:18.195 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.195 21:29:39 -- setup/common.sh@31 -- # 
IFS=': ' 00:16:18.195 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.195 21:29:39 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:18.195 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.195 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.195 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.195 21:29:39 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:18.195 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.195 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.195 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.195 21:29:39 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:18.195 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.195 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.195 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.195 21:29:39 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:18.195 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.195 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.195 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.195 21:29:39 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:18.195 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.195 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.195 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.195 21:29:39 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:18.195 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.195 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.195 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.195 21:29:39 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:18.195 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.195 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.195 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.195 21:29:39 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:18.195 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.195 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.195 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.195 21:29:39 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:18.195 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.195 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.195 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.195 21:29:39 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:18.195 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.195 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.195 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.195 21:29:39 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:18.195 21:29:39 -- setup/common.sh@33 -- # echo 512 00:16:18.195 21:29:39 -- setup/common.sh@33 -- # return 0 00:16:18.195 21:29:39 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:16:18.195 21:29:39 -- setup/hugepages.sh@112 -- # get_nodes 00:16:18.195 21:29:39 -- setup/hugepages.sh@27 -- # local node 00:16:18.195 21:29:39 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:16:18.195 21:29:39 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 
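Aside: the xtrace above shows setup/common.sh's get_meminfo scanning each meminfo key until it reaches the requested field, and setup/hugepages.sh's get_nodes then repeating the lookup per NUMA node via /sys/devices/system/node/node<N>/meminfo. A minimal standalone sketch of that lookup follows; the helper name read_meminfo_field is made up for illustration and is not claimed to mirror the repo's exact implementation:

#!/usr/bin/env bash
# Hypothetical helper (illustration only): fetch one numeric field, e.g.
# HugePages_Surp, from /proc/meminfo or from a node-local sysfs meminfo file.
read_meminfo_field() {
    local field=$1 node=${2:-}
    local mem_f=/proc/meminfo line var val _
    # Per-node counters live in sysfs when a node id is supplied.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    while read -r line; do
        line=${line#"Node $node "}            # node files prefix each line with "Node <id>"
        IFS=': ' read -r var val _ <<<"$line" # same field/value split the trace uses
        if [[ $var == "$field" ]]; then
            echo "${val:-0}"
            return 0
        fi
    done <"$mem_f"
    echo 0                                    # field absent -> report zero
}

# Example values matching this point in the log:
read_meminfo_field HugePages_Surp 0           # -> 0
read_meminfo_field HugePages_Total            # -> 512

On this single-node VM, node0 is the only entry under /sys/devices/system/node, which is why get_nodes reports no_nodes=1 in the next trace entry.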
00:16:18.195 21:29:39 -- setup/hugepages.sh@32 -- # no_nodes=1 00:16:18.195 21:29:39 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:16:18.195 21:29:39 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:16:18.195 21:29:39 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:16:18.195 21:29:39 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:16:18.195 21:29:39 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:16:18.195 21:29:39 -- setup/common.sh@18 -- # local node=0 00:16:18.195 21:29:39 -- setup/common.sh@19 -- # local var val 00:16:18.195 21:29:39 -- setup/common.sh@20 -- # local mem_f mem 00:16:18.195 21:29:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:16:18.195 21:29:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:16:18.195 21:29:39 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:16:18.195 21:29:39 -- setup/common.sh@28 -- # mapfile -t mem 00:16:18.195 21:29:39 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:16:18.195 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.195 21:29:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7753640 kB' 'MemUsed: 4488336 kB' 'SwapCached: 0 kB' 'Active: 450360 kB' 'Inactive: 2645876 kB' 'Active(anon): 129460 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2645876 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'FilePages: 2977244 kB' 'Mapped: 48924 kB' 'AnonPages: 120612 kB' 'Shmem: 10468 kB' 'KernelStack: 6560 kB' 'PageTables: 4296 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 81348 kB' 'Slab: 159580 kB' 'SReclaimable: 81348 kB' 'SUnreclaim: 78232 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:16:18.195 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.195 21:29:39 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.195 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.195 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.195 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.195 21:29:39 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.195 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.195 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.195 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.195 21:29:39 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.195 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.195 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.195 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.195 21:29:39 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.195 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.195 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.195 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.195 21:29:39 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.195 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.195 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.195 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.196 21:29:39 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.196 
21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.196 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.196 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.196 21:29:39 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.196 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.196 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.196 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.196 21:29:39 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.196 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.196 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.196 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.196 21:29:39 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.196 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.196 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.196 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.196 21:29:39 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.196 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.196 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.196 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.196 21:29:39 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.196 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.196 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.196 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.196 21:29:39 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.196 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.196 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.196 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.196 21:29:39 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.196 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.196 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.196 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.196 21:29:39 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.196 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.196 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.196 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.196 21:29:39 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.196 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.196 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.196 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.196 21:29:39 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.196 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.196 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.196 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.196 21:29:39 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.196 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.196 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.196 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.196 21:29:39 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.196 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.196 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.196 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.196 
21:29:39 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.196 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.196 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.196 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.196 21:29:39 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.196 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.196 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.196 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.196 21:29:39 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.196 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.196 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.196 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.196 21:29:39 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.196 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.196 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.196 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.196 21:29:39 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.196 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.196 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.196 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.196 21:29:39 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.196 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.196 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.196 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.196 21:29:39 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.196 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.196 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.196 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.196 21:29:39 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.196 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.196 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.196 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.196 21:29:39 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.196 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.196 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.196 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.196 21:29:39 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.196 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.196 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.196 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.196 21:29:39 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.196 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.196 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.196 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.196 21:29:39 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.196 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.196 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.196 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.196 21:29:39 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.196 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.196 21:29:39 -- 
setup/common.sh@31 -- # IFS=': ' 00:16:18.196 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.196 21:29:39 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.196 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.196 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.196 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.196 21:29:39 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.196 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.196 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.196 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.196 21:29:39 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.196 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.196 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.196 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.196 21:29:39 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.196 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.196 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.196 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.196 21:29:39 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.196 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.196 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.196 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.196 21:29:39 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.196 21:29:39 -- setup/common.sh@33 -- # echo 0 00:16:18.196 21:29:39 -- setup/common.sh@33 -- # return 0 00:16:18.196 node0=512 expecting 512 00:16:18.196 21:29:39 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:16:18.196 21:29:39 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:16:18.196 21:29:39 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:16:18.196 21:29:39 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:16:18.196 21:29:39 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:16:18.196 21:29:39 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:16:18.196 00:16:18.196 real 0m0.528s 00:16:18.196 user 0m0.261s 00:16:18.196 sys 0m0.280s 00:16:18.196 ************************************ 00:16:18.196 END TEST custom_alloc 00:16:18.196 ************************************ 00:16:18.196 21:29:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:18.196 21:29:39 -- common/autotest_common.sh@10 -- # set +x 00:16:18.196 21:29:39 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:16:18.196 21:29:39 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:16:18.196 21:29:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:18.196 21:29:39 -- common/autotest_common.sh@10 -- # set +x 00:16:18.476 ************************************ 00:16:18.476 START TEST no_shrink_alloc 00:16:18.476 ************************************ 00:16:18.476 21:29:39 -- common/autotest_common.sh@1104 -- # no_shrink_alloc 00:16:18.476 21:29:39 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:16:18.476 21:29:39 -- setup/hugepages.sh@49 -- # local size=2097152 00:16:18.476 21:29:39 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:16:18.476 21:29:39 -- setup/hugepages.sh@51 -- # shift 00:16:18.476 21:29:39 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:16:18.476 21:29:39 -- setup/hugepages.sh@52 -- 
# local node_ids 00:16:18.476 21:29:39 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:16:18.476 21:29:39 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:16:18.476 21:29:39 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:16:18.476 21:29:39 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:16:18.476 21:29:39 -- setup/hugepages.sh@62 -- # local user_nodes 00:16:18.476 21:29:39 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:16:18.476 21:29:39 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:16:18.476 21:29:39 -- setup/hugepages.sh@67 -- # nodes_test=() 00:16:18.476 21:29:39 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:16:18.476 21:29:39 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:16:18.476 21:29:39 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:16:18.476 21:29:39 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:16:18.476 21:29:39 -- setup/hugepages.sh@73 -- # return 0 00:16:18.476 21:29:39 -- setup/hugepages.sh@198 -- # setup output 00:16:18.476 21:29:39 -- setup/common.sh@9 -- # [[ output == output ]] 00:16:18.476 21:29:39 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:18.739 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:18.739 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:18.739 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:18.739 21:29:39 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:16:18.739 21:29:39 -- setup/hugepages.sh@89 -- # local node 00:16:18.739 21:29:39 -- setup/hugepages.sh@90 -- # local sorted_t 00:16:18.739 21:29:39 -- setup/hugepages.sh@91 -- # local sorted_s 00:16:18.739 21:29:39 -- setup/hugepages.sh@92 -- # local surp 00:16:18.739 21:29:39 -- setup/hugepages.sh@93 -- # local resv 00:16:18.739 21:29:39 -- setup/hugepages.sh@94 -- # local anon 00:16:18.739 21:29:39 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:16:18.739 21:29:39 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:16:18.739 21:29:39 -- setup/common.sh@17 -- # local get=AnonHugePages 00:16:18.739 21:29:39 -- setup/common.sh@18 -- # local node= 00:16:18.739 21:29:39 -- setup/common.sh@19 -- # local var val 00:16:18.739 21:29:39 -- setup/common.sh@20 -- # local mem_f mem 00:16:18.739 21:29:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:16:18.739 21:29:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:16:18.739 21:29:39 -- setup/common.sh@25 -- # [[ -n '' ]] 00:16:18.739 21:29:39 -- setup/common.sh@28 -- # mapfile -t mem 00:16:18.739 21:29:39 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:16:18.739 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.739 21:29:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6702476 kB' 'MemAvailable: 9472988 kB' 'Buffers: 2436 kB' 'Cached: 2974808 kB' 'SwapCached: 0 kB' 'Active: 450680 kB' 'Inactive: 2645876 kB' 'Active(anon): 129780 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2645876 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120888 kB' 'Mapped: 49052 kB' 'Shmem: 10468 kB' 'KReclaimable: 81348 kB' 'Slab: 159576 kB' 'SReclaimable: 81348 kB' 'SUnreclaim: 78228 kB' 'KernelStack: 6592 kB' 'PageTables: 4396 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 
'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 351880 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54916 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB' 00:16:18.739 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.739 21:29:39 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:18.739 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.739 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.739 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.739 21:29:39 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:18.739 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.739 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.739 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.739 21:29:39 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:18.739 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.739 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.739 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.739 21:29:39 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:18.739 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.739 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.739 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.739 21:29:39 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:18.739 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.739 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.739 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.739 21:29:39 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:18.739 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.739 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.739 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.739 21:29:39 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:18.739 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.739 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.739 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.739 21:29:39 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:18.739 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.739 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.739 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.739 21:29:39 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:18.739 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.739 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.739 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.739 21:29:39 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:18.739 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.739 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.739 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.739 21:29:39 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:18.739 21:29:39 -- 
setup/common.sh@32 -- # continue 00:16:18.739 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.739 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.739 21:29:39 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:18.739 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.739 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.739 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.739 21:29:39 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:18.739 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.739 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.739 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.739 21:29:39 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:18.739 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.739 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.739 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.739 21:29:39 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:18.739 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.739 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.739 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.739 21:29:39 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:18.739 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.739 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.739 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.739 21:29:39 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:18.739 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.739 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.739 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.739 21:29:39 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:18.739 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.739 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.739 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.739 21:29:39 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:18.739 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.739 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.739 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.739 21:29:39 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:18.739 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.739 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.739 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.739 21:29:39 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:18.739 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.739 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.739 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.739 21:29:39 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:18.739 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.739 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.739 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.739 21:29:39 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:18.739 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.739 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.739 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.739 21:29:39 -- setup/common.sh@32 -- # [[ KReclaimable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:18.739 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.739 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.739 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.739 21:29:39 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:18.739 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.739 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.739 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.739 21:29:39 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:18.739 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.739 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.739 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.739 21:29:39 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:18.739 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.739 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.739 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.739 21:29:39 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:18.739 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.739 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.739 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.739 21:29:39 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:18.739 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.739 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.739 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.739 21:29:39 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:18.739 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.739 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.739 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.739 21:29:39 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:18.739 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.739 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.739 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.739 21:29:39 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:18.739 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.739 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.739 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.739 21:29:39 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:18.739 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.739 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.739 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.739 21:29:39 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:18.739 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.739 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.739 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.739 21:29:39 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:18.739 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.739 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.739 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.739 21:29:39 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:18.739 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.739 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.739 21:29:39 -- setup/common.sh@31 -- # read 
-r var val _ 00:16:18.739 21:29:39 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:18.739 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.739 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.739 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.739 21:29:39 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:18.739 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.739 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.739 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.739 21:29:39 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:18.739 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.739 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.739 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.739 21:29:39 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:18.739 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.739 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.740 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.740 21:29:39 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:18.740 21:29:39 -- setup/common.sh@33 -- # echo 0 00:16:18.740 21:29:39 -- setup/common.sh@33 -- # return 0 00:16:18.740 21:29:39 -- setup/hugepages.sh@97 -- # anon=0 00:16:18.740 21:29:39 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:16:18.740 21:29:39 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:16:18.740 21:29:39 -- setup/common.sh@18 -- # local node= 00:16:18.740 21:29:39 -- setup/common.sh@19 -- # local var val 00:16:18.740 21:29:39 -- setup/common.sh@20 -- # local mem_f mem 00:16:18.740 21:29:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:16:18.740 21:29:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:16:18.740 21:29:39 -- setup/common.sh@25 -- # [[ -n '' ]] 00:16:18.740 21:29:39 -- setup/common.sh@28 -- # mapfile -t mem 00:16:18.740 21:29:39 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:16:18.740 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.740 21:29:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6702476 kB' 'MemAvailable: 9472988 kB' 'Buffers: 2436 kB' 'Cached: 2974808 kB' 'SwapCached: 0 kB' 'Active: 450436 kB' 'Inactive: 2645876 kB' 'Active(anon): 129536 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2645876 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120640 kB' 'Mapped: 48924 kB' 'Shmem: 10468 kB' 'KReclaimable: 81348 kB' 'Slab: 159576 kB' 'SReclaimable: 81348 kB' 'SUnreclaim: 78228 kB' 'KernelStack: 6560 kB' 'PageTables: 4300 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 351880 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54900 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB' 00:16:18.740 21:29:39 -- setup/common.sh@31 -- # read -r var 
val _ 00:16:18.740 21:29:39 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.740 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.740 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.740 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.740 21:29:39 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.740 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.740 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.740 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.740 21:29:39 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.740 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.740 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.740 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.740 21:29:39 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.740 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.740 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.740 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.740 21:29:39 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.740 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.740 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.740 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.740 21:29:39 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.740 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.740 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.740 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.740 21:29:39 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.740 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.740 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.740 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.740 21:29:39 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.740 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.740 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.740 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.740 21:29:39 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.740 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.740 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.740 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.740 21:29:39 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.740 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.740 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.740 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.740 21:29:39 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.740 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.740 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.740 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.740 21:29:39 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.740 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.740 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.740 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.740 21:29:39 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.740 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.740 21:29:39 
-- setup/common.sh@31 -- # IFS=': ' 00:16:18.740 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.740 21:29:39 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.740 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.740 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.740 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.740 21:29:39 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.740 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.740 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.740 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.740 21:29:39 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.740 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.740 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.740 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.740 21:29:39 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.740 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.740 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.740 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.740 21:29:39 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.740 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.740 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.740 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.740 21:29:39 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.740 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.740 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.740 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.740 21:29:39 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.740 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.740 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.740 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.740 21:29:39 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.740 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.740 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.740 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.740 21:29:39 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.740 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.740 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.740 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.740 21:29:39 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.740 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.740 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.740 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.740 21:29:39 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.740 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.740 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.740 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.740 21:29:39 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.740 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.740 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.740 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.740 21:29:39 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.740 
21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.740 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.740 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.740 21:29:39 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.740 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.740 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.740 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.740 21:29:39 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.740 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.740 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.740 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.740 21:29:39 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.740 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.740 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.740 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.740 21:29:39 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.740 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.740 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.740 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.740 21:29:39 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.740 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.740 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.740 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.740 21:29:39 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.740 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.740 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.740 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.740 21:29:39 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.740 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.740 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.740 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.740 21:29:39 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.740 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.740 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.740 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.740 21:29:39 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.740 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.740 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.740 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.741 21:29:39 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.741 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.741 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.741 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.741 21:29:39 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.741 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.741 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.741 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.741 21:29:39 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.741 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.741 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.741 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 
00:16:18.741 21:29:39 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.741 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.741 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.741 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.741 21:29:39 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.741 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.741 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.741 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.741 21:29:39 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.741 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.741 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.741 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.741 21:29:39 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.741 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.741 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.741 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.741 21:29:39 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.741 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.741 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.741 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.741 21:29:39 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.741 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.741 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.741 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.741 21:29:39 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.741 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.741 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.741 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.741 21:29:39 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.741 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.741 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.741 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.741 21:29:39 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.741 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.741 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.741 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.741 21:29:39 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.741 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.741 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.741 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.741 21:29:39 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.741 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.741 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.741 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.741 21:29:39 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.741 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.741 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.741 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.741 21:29:39 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.741 21:29:39 -- setup/common.sh@32 -- # 
continue 00:16:18.741 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.741 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.741 21:29:39 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.741 21:29:39 -- setup/common.sh@33 -- # echo 0 00:16:18.741 21:29:39 -- setup/common.sh@33 -- # return 0 00:16:18.741 21:29:39 -- setup/hugepages.sh@99 -- # surp=0 00:16:18.741 21:29:39 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:16:18.741 21:29:39 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:16:18.741 21:29:39 -- setup/common.sh@18 -- # local node= 00:16:18.741 21:29:39 -- setup/common.sh@19 -- # local var val 00:16:18.741 21:29:39 -- setup/common.sh@20 -- # local mem_f mem 00:16:18.741 21:29:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:16:18.741 21:29:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:16:18.741 21:29:39 -- setup/common.sh@25 -- # [[ -n '' ]] 00:16:18.741 21:29:39 -- setup/common.sh@28 -- # mapfile -t mem 00:16:18.741 21:29:39 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:16:18.741 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.741 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.741 21:29:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6702476 kB' 'MemAvailable: 9472988 kB' 'Buffers: 2436 kB' 'Cached: 2974808 kB' 'SwapCached: 0 kB' 'Active: 450676 kB' 'Inactive: 2645876 kB' 'Active(anon): 129776 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2645876 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120876 kB' 'Mapped: 48924 kB' 'Shmem: 10468 kB' 'KReclaimable: 81348 kB' 'Slab: 159576 kB' 'SReclaimable: 81348 kB' 'SUnreclaim: 78228 kB' 'KernelStack: 6560 kB' 'PageTables: 4300 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 351880 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54900 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB' 00:16:18.741 21:29:39 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.741 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.741 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.741 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.741 21:29:39 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.741 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.741 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.741 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.741 21:29:39 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.741 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.741 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.741 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.741 21:29:39 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.741 21:29:39 -- setup/common.sh@32 -- # 
continue 00:16:18.741 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.741 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.741 21:29:39 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.741 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.741 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.741 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.741 21:29:39 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.741 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.741 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.741 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.741 21:29:39 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.741 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.741 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.741 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.741 21:29:39 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.741 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.741 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.741 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.741 21:29:39 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.741 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.741 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.741 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.741 21:29:39 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.741 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.741 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.741 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.741 21:29:39 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.741 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.741 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.741 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.741 21:29:39 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.741 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.741 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.741 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.741 21:29:39 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.741 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.741 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.741 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.741 21:29:39 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.741 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.741 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.741 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.741 21:29:39 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.741 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.741 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.741 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.741 21:29:39 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.741 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.741 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.741 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.741 21:29:39 -- setup/common.sh@32 -- # [[ 
Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.741 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.741 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.741 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.741 21:29:39 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.741 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.741 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.741 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.741 21:29:39 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.741 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.741 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.741 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.742 21:29:39 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.742 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.742 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.742 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.742 21:29:39 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.742 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.742 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.742 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.742 21:29:39 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.742 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.742 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.742 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.742 21:29:39 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.742 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.742 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.742 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.742 21:29:39 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.742 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.742 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.742 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.742 21:29:39 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.742 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.742 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.742 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.742 21:29:39 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.742 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.742 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.742 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.742 21:29:39 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.742 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.742 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.742 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.742 21:29:39 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.742 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.742 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.742 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.742 21:29:39 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.742 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.742 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.742 21:29:39 -- setup/common.sh@31 
-- # read -r var val _ 00:16:18.742 21:29:39 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.742 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.742 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.742 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.742 21:29:39 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.742 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.742 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.742 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.742 21:29:39 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.742 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.742 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.742 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.742 21:29:39 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.742 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.742 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.742 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.742 21:29:39 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.742 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.742 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.742 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.742 21:29:39 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.742 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.742 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.742 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.742 21:29:39 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.742 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.742 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.742 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.742 21:29:39 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.742 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.742 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.742 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.742 21:29:39 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.742 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.742 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.742 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.742 21:29:39 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.742 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.742 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.742 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.742 21:29:39 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.742 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.742 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.742 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.742 21:29:39 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.742 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.742 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.742 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.742 21:29:39 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.742 21:29:39 -- 
setup/common.sh@32 -- # continue 00:16:18.742 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.742 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.742 21:29:39 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.742 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.742 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.742 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.742 21:29:39 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.742 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.742 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.742 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.742 21:29:39 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.742 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.742 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.742 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.742 21:29:39 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.742 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.742 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.742 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.742 21:29:39 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.742 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.742 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.742 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.742 21:29:39 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.742 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.742 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.742 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.742 21:29:39 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.742 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.742 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.742 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.742 21:29:39 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.742 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.742 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.742 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.742 21:29:39 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:18.742 21:29:39 -- setup/common.sh@33 -- # echo 0 00:16:18.742 21:29:39 -- setup/common.sh@33 -- # return 0 00:16:18.742 21:29:39 -- setup/hugepages.sh@100 -- # resv=0 00:16:18.742 21:29:39 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:16:18.742 nr_hugepages=1024 00:16:18.742 21:29:39 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:16:18.742 resv_hugepages=0 00:16:18.742 surplus_hugepages=0 00:16:18.742 anon_hugepages=0 00:16:18.742 21:29:39 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:16:18.742 21:29:39 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:16:18.742 21:29:39 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:16:18.742 21:29:39 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:16:18.742 21:29:39 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:16:18.742 21:29:39 -- setup/common.sh@17 -- # local get=HugePages_Total 00:16:18.742 21:29:39 -- setup/common.sh@18 -- # local node= 00:16:18.742 21:29:39 -- 
setup/common.sh@19 -- # local var val 00:16:18.742 21:29:39 -- setup/common.sh@20 -- # local mem_f mem 00:16:18.742 21:29:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:16:18.742 21:29:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:16:18.742 21:29:39 -- setup/common.sh@25 -- # [[ -n '' ]] 00:16:18.742 21:29:39 -- setup/common.sh@28 -- # mapfile -t mem 00:16:18.742 21:29:39 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:16:18.742 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.742 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.742 21:29:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6702476 kB' 'MemAvailable: 9472988 kB' 'Buffers: 2436 kB' 'Cached: 2974808 kB' 'SwapCached: 0 kB' 'Active: 450536 kB' 'Inactive: 2645876 kB' 'Active(anon): 129636 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2645876 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120780 kB' 'Mapped: 48924 kB' 'Shmem: 10468 kB' 'KReclaimable: 81348 kB' 'Slab: 159576 kB' 'SReclaimable: 81348 kB' 'SUnreclaim: 78228 kB' 'KernelStack: 6576 kB' 'PageTables: 4356 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 351880 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54900 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB' 00:16:18.742 21:29:39 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:18.742 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.742 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.742 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.742 21:29:39 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:18.742 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.742 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.743 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.743 21:29:39 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:18.743 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.743 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.743 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.743 21:29:39 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:18.743 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.743 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.743 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.743 21:29:39 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:18.743 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.743 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.743 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.743 21:29:39 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:18.743 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.743 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.743 
21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.743 21:29:39 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:18.743 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.743 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.743 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.743 21:29:39 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:18.743 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.743 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.743 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.743 21:29:39 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:18.743 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.743 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.743 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.743 21:29:39 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:18.743 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.743 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.743 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.743 21:29:39 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:18.743 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.743 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.743 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.743 21:29:39 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:18.743 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.743 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.743 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.743 21:29:39 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:18.743 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.743 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.743 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.743 21:29:39 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:18.743 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.743 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.743 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.743 21:29:39 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:18.743 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.743 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.743 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.743 21:29:39 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:18.743 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.743 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.743 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.743 21:29:39 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:18.743 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.743 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.743 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.743 21:29:39 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:18.743 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.743 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.743 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.743 21:29:39 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:18.743 
21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.743 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.743 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.743 21:29:39 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:18.743 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.743 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.743 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.743 21:29:39 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:18.743 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.743 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.743 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.743 21:29:39 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:18.743 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.743 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.743 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.743 21:29:39 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:18.743 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.743 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.743 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.743 21:29:39 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:18.743 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.743 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.743 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.743 21:29:39 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:18.743 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.743 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.743 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.743 21:29:39 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:18.743 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.743 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.743 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.743 21:29:39 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:18.743 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.743 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.743 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.743 21:29:39 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:18.743 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.743 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.743 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.743 21:29:39 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:18.743 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.743 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.743 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.743 21:29:39 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:18.743 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.743 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.743 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.743 21:29:39 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:18.743 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.743 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.743 21:29:39 -- setup/common.sh@31 -- # read -r var 
val _ 00:16:18.743 21:29:39 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:18.743 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.743 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.743 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.743 21:29:39 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:18.743 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.743 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.743 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.743 21:29:39 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:18.743 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.743 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.743 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.743 21:29:39 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:18.743 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.743 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.743 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.743 21:29:39 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:18.743 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.743 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.743 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.743 21:29:39 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:18.743 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.743 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.743 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.743 21:29:39 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:18.743 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.743 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.744 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.744 21:29:39 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:18.744 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.744 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.744 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.744 21:29:39 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:18.744 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.744 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.744 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.744 21:29:39 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:18.744 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.744 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.744 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.744 21:29:39 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:18.744 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.744 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.744 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.744 21:29:39 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:18.744 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.744 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.744 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.744 21:29:39 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:18.744 21:29:39 -- 
setup/common.sh@32 -- # continue 00:16:18.744 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.744 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.744 21:29:39 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:18.744 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.744 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.744 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.744 21:29:39 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:18.744 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.744 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.744 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.744 21:29:39 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:18.744 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.744 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.744 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.744 21:29:39 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:18.744 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.744 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.744 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.744 21:29:39 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:18.744 21:29:39 -- setup/common.sh@33 -- # echo 1024 00:16:18.744 21:29:39 -- setup/common.sh@33 -- # return 0 00:16:18.744 21:29:39 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:16:18.744 21:29:39 -- setup/hugepages.sh@112 -- # get_nodes 00:16:18.744 21:29:39 -- setup/hugepages.sh@27 -- # local node 00:16:18.744 21:29:39 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:16:18.744 21:29:39 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:16:18.744 21:29:39 -- setup/hugepages.sh@32 -- # no_nodes=1 00:16:18.744 21:29:39 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:16:18.744 21:29:39 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:16:18.744 21:29:39 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:16:18.744 21:29:39 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:16:18.744 21:29:39 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:16:18.744 21:29:39 -- setup/common.sh@18 -- # local node=0 00:16:18.744 21:29:39 -- setup/common.sh@19 -- # local var val 00:16:18.744 21:29:39 -- setup/common.sh@20 -- # local mem_f mem 00:16:18.744 21:29:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:16:18.744 21:29:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:16:18.744 21:29:39 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:16:18.744 21:29:39 -- setup/common.sh@28 -- # mapfile -t mem 00:16:18.744 21:29:39 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:16:18.744 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.744 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.744 21:29:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6702476 kB' 'MemUsed: 5539500 kB' 'SwapCached: 0 kB' 'Active: 450344 kB' 'Inactive: 2645876 kB' 'Active(anon): 129444 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2645876 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 2977244 kB' 'Mapped: 48924 kB' 'AnonPages: 120812 kB' 'Shmem: 10468 kB' 
'KernelStack: 6576 kB' 'PageTables: 4356 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 81348 kB' 'Slab: 159576 kB' 'SReclaimable: 81348 kB' 'SUnreclaim: 78228 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:16:18.744 21:29:39 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.744 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.744 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.744 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.744 21:29:39 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.744 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.744 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.744 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.744 21:29:39 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.744 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.744 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.744 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.744 21:29:39 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.744 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.744 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.744 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.744 21:29:39 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.744 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.744 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.744 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.744 21:29:39 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.744 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.744 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.744 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.744 21:29:39 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.744 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.744 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.744 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.744 21:29:39 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.744 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.744 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.744 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.744 21:29:39 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.744 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.744 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.744 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.744 21:29:39 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.744 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.744 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.744 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.744 21:29:39 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.744 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.744 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.744 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.744 21:29:39 -- setup/common.sh@32 -- # [[ 
Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.744 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.744 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.744 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.744 21:29:39 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.744 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.744 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.744 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.744 21:29:39 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.744 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.744 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.744 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.744 21:29:39 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.744 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.744 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.744 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.744 21:29:39 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.744 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.744 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.744 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.744 21:29:39 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.744 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.744 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.744 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.744 21:29:39 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.744 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.744 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.744 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.744 21:29:39 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.744 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.744 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.744 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.744 21:29:39 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.744 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.744 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.744 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.744 21:29:39 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.744 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.744 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.744 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.744 21:29:39 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.744 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.744 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.744 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.744 21:29:39 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.744 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.744 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.744 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.744 21:29:39 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.744 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.744 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.744 21:29:39 -- 
setup/common.sh@31 -- # read -r var val _ 00:16:18.745 21:29:39 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.745 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.745 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.745 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.745 21:29:39 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.745 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.745 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.745 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.745 21:29:39 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.745 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.745 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.745 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.745 21:29:39 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.745 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.745 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.745 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.745 21:29:39 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.745 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.745 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.745 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.745 21:29:39 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.745 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.745 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.745 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.745 21:29:39 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.745 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.745 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.745 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.745 21:29:39 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.745 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.745 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.745 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.745 21:29:39 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.745 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.745 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.745 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.745 21:29:39 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.745 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.745 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.745 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.745 21:29:39 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.745 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.745 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.745 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.745 21:29:39 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.745 21:29:39 -- setup/common.sh@32 -- # continue 00:16:18.745 21:29:39 -- setup/common.sh@31 -- # IFS=': ' 00:16:18.745 21:29:39 -- setup/common.sh@31 -- # read -r var val _ 00:16:18.745 21:29:39 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:18.745 
21:29:39 -- setup/common.sh@33 -- # echo 0 00:16:18.745 21:29:39 -- setup/common.sh@33 -- # return 0 00:16:19.003 21:29:39 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:16:19.003 21:29:39 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:16:19.003 21:29:39 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:16:19.003 21:29:39 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:16:19.003 21:29:39 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:16:19.003 node0=1024 expecting 1024 00:16:19.003 21:29:39 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:16:19.003 21:29:39 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:16:19.003 21:29:39 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:16:19.003 21:29:39 -- setup/hugepages.sh@202 -- # setup output 00:16:19.003 21:29:39 -- setup/common.sh@9 -- # [[ output == output ]] 00:16:19.003 21:29:39 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:19.263 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:19.263 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:19.263 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:19.263 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:16:19.263 21:29:40 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:16:19.263 21:29:40 -- setup/hugepages.sh@89 -- # local node 00:16:19.263 21:29:40 -- setup/hugepages.sh@90 -- # local sorted_t 00:16:19.263 21:29:40 -- setup/hugepages.sh@91 -- # local sorted_s 00:16:19.263 21:29:40 -- setup/hugepages.sh@92 -- # local surp 00:16:19.263 21:29:40 -- setup/hugepages.sh@93 -- # local resv 00:16:19.263 21:29:40 -- setup/hugepages.sh@94 -- # local anon 00:16:19.263 21:29:40 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:16:19.263 21:29:40 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:16:19.263 21:29:40 -- setup/common.sh@17 -- # local get=AnonHugePages 00:16:19.263 21:29:40 -- setup/common.sh@18 -- # local node= 00:16:19.263 21:29:40 -- setup/common.sh@19 -- # local var val 00:16:19.263 21:29:40 -- setup/common.sh@20 -- # local mem_f mem 00:16:19.263 21:29:40 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:16:19.263 21:29:40 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:16:19.263 21:29:40 -- setup/common.sh@25 -- # [[ -n '' ]] 00:16:19.263 21:29:40 -- setup/common.sh@28 -- # mapfile -t mem 00:16:19.263 21:29:40 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:16:19.263 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.263 21:29:40 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6704844 kB' 'MemAvailable: 9475356 kB' 'Buffers: 2436 kB' 'Cached: 2974808 kB' 'SwapCached: 0 kB' 'Active: 446176 kB' 'Inactive: 2645876 kB' 'Active(anon): 125276 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2645876 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 116428 kB' 'Mapped: 48300 kB' 'Shmem: 10468 kB' 'KReclaimable: 81348 kB' 'Slab: 159316 kB' 'SReclaimable: 81348 kB' 'SUnreclaim: 77968 kB' 'KernelStack: 6568 kB' 'PageTables: 4020 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 334408 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 
54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB' 00:16:19.263 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.263 21:29:40 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:19.263 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.263 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.263 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.263 21:29:40 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:19.263 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.263 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.263 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.263 21:29:40 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:19.263 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.263 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.263 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.263 21:29:40 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:19.263 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.263 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.263 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.263 21:29:40 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:19.263 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.263 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.263 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.263 21:29:40 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:19.263 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.263 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.263 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.263 21:29:40 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:19.263 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.263 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.263 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.263 21:29:40 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:19.263 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.263 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.263 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.263 21:29:40 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:19.263 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.263 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.263 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.263 21:29:40 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:19.263 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.263 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.263 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.264 21:29:40 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:19.264 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.264 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.264 21:29:40 -- setup/common.sh@31 
-- # read -r var val _ 00:16:19.264 21:29:40 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:19.264 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.264 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.264 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.264 21:29:40 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:19.264 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.264 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.264 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.264 21:29:40 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:19.264 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.264 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.264 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.264 21:29:40 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:19.264 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.264 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.264 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.264 21:29:40 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:19.264 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.264 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.264 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.264 21:29:40 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:19.264 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.264 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.264 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.264 21:29:40 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:19.264 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.264 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.264 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.264 21:29:40 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:19.264 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.264 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.264 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.264 21:29:40 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:19.264 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.264 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.264 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.264 21:29:40 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:19.264 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.264 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.264 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.264 21:29:40 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:19.264 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.264 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.264 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.264 21:29:40 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:19.264 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.264 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.264 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.264 21:29:40 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:19.264 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.264 21:29:40 -- setup/common.sh@31 -- # 
IFS=': ' 00:16:19.264 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.264 21:29:40 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:19.264 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.264 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.264 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.264 21:29:40 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:19.264 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.264 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.264 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.264 21:29:40 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:19.264 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.264 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.264 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.264 21:29:40 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:19.264 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.264 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.264 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.264 21:29:40 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:19.264 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.264 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.264 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.264 21:29:40 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:19.264 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.264 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.264 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.264 21:29:40 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:19.264 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.264 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.264 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.264 21:29:40 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:19.264 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.264 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.264 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.264 21:29:40 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:19.264 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.264 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.264 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.264 21:29:40 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:19.264 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.264 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.264 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.264 21:29:40 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:19.264 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.264 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.264 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.264 21:29:40 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:19.264 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.264 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.264 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.264 21:29:40 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:19.264 21:29:40 -- 
setup/common.sh@32 -- # continue 00:16:19.264 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.264 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.264 21:29:40 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:19.264 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.264 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.264 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.264 21:29:40 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:19.264 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.264 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.264 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.264 21:29:40 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:19.264 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.264 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.264 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.264 21:29:40 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:19.264 21:29:40 -- setup/common.sh@33 -- # echo 0 00:16:19.264 21:29:40 -- setup/common.sh@33 -- # return 0 00:16:19.264 21:29:40 -- setup/hugepages.sh@97 -- # anon=0 00:16:19.264 21:29:40 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:16:19.264 21:29:40 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:16:19.264 21:29:40 -- setup/common.sh@18 -- # local node= 00:16:19.264 21:29:40 -- setup/common.sh@19 -- # local var val 00:16:19.264 21:29:40 -- setup/common.sh@20 -- # local mem_f mem 00:16:19.264 21:29:40 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:16:19.264 21:29:40 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:16:19.264 21:29:40 -- setup/common.sh@25 -- # [[ -n '' ]] 00:16:19.264 21:29:40 -- setup/common.sh@28 -- # mapfile -t mem 00:16:19.264 21:29:40 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:16:19.264 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.264 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.264 21:29:40 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6704844 kB' 'MemAvailable: 9475356 kB' 'Buffers: 2436 kB' 'Cached: 2974808 kB' 'SwapCached: 0 kB' 'Active: 445536 kB' 'Inactive: 2645876 kB' 'Active(anon): 124636 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2645876 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 115820 kB' 'Mapped: 48184 kB' 'Shmem: 10468 kB' 'KReclaimable: 81348 kB' 'Slab: 159316 kB' 'SReclaimable: 81348 kB' 'SUnreclaim: 77968 kB' 'KernelStack: 6464 kB' 'PageTables: 3852 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 334408 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54788 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB' 00:16:19.264 21:29:40 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:19.264 21:29:40 -- 
setup/common.sh@32 -- # continue 00:16:19.264 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.264 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.264 21:29:40 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:19.264 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.264 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.264 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.264 21:29:40 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:19.264 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.264 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.264 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.264 21:29:40 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:19.264 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.264 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.264 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.264 21:29:40 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:19.264 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.264 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.265 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.265 21:29:40 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:19.265 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.265 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.265 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.265 21:29:40 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:19.265 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.265 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.265 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.265 21:29:40 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:19.265 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.265 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.265 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.265 21:29:40 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:19.265 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.265 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.265 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.265 21:29:40 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:19.265 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.265 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.265 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.265 21:29:40 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:19.265 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.265 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.265 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.265 21:29:40 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:19.265 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.265 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.265 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.265 21:29:40 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:19.265 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.265 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.265 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.265 21:29:40 -- 
setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:19.265 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.265 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.265 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.265 21:29:40 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:19.265 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.265 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.265 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.265 21:29:40 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:19.265 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.265 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.265 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.265 21:29:40 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:19.265 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.265 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.265 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.265 21:29:40 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:19.265 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.265 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.265 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.265 21:29:40 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:19.265 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.265 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.265 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.265 21:29:40 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:19.265 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.265 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.265 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.265 21:29:40 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:19.265 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.265 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.265 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.265 21:29:40 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:19.265 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.265 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.265 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.265 21:29:40 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:19.265 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.265 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.265 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.265 21:29:40 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:19.265 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.265 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.265 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.265 21:29:40 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:19.265 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.265 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.265 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.265 21:29:40 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:19.265 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.265 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.265 21:29:40 -- 
setup/common.sh@31 -- # read -r var val _ 00:16:19.265 21:29:40 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:19.265 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.265 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.265 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.265 21:29:40 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:19.265 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.265 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.265 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.265 21:29:40 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:19.265 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.265 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.265 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.265 21:29:40 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:19.265 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.265 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.265 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.265 21:29:40 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:19.265 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.265 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.265 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.265 21:29:40 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:19.265 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.265 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.265 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.265 21:29:40 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:19.265 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.265 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.265 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.265 21:29:40 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:19.265 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.265 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.265 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.265 21:29:40 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:19.265 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.265 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.265 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.265 21:29:40 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:19.265 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.265 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.265 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.265 21:29:40 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:19.265 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.265 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.265 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.265 21:29:40 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:19.265 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.265 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.265 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.265 21:29:40 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:19.265 21:29:40 -- 
setup/common.sh@32 -- # continue 00:16:19.265 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.265 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.265 21:29:40 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:19.265 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.265 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.265 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.265 21:29:40 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:19.265 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.265 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.265 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.265 21:29:40 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:19.265 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.265 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.265 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.265 21:29:40 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:19.265 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.265 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.265 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.265 21:29:40 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:19.265 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.265 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.265 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.265 21:29:40 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:19.265 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.265 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.265 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.265 21:29:40 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:19.265 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.265 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.265 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.265 21:29:40 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:19.265 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.265 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.265 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.266 21:29:40 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:19.266 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.266 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.266 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.266 21:29:40 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:19.266 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.266 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.266 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.266 21:29:40 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:19.266 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.266 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.266 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.266 21:29:40 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:19.266 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.266 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.266 21:29:40 -- setup/common.sh@31 -- # read -r 
var val _ 00:16:19.266 21:29:40 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:19.266 21:29:40 -- setup/common.sh@33 -- # echo 0 00:16:19.266 21:29:40 -- setup/common.sh@33 -- # return 0 00:16:19.266 21:29:40 -- setup/hugepages.sh@99 -- # surp=0 00:16:19.266 21:29:40 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:16:19.266 21:29:40 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:16:19.266 21:29:40 -- setup/common.sh@18 -- # local node= 00:16:19.266 21:29:40 -- setup/common.sh@19 -- # local var val 00:16:19.266 21:29:40 -- setup/common.sh@20 -- # local mem_f mem 00:16:19.266 21:29:40 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:16:19.266 21:29:40 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:16:19.266 21:29:40 -- setup/common.sh@25 -- # [[ -n '' ]] 00:16:19.266 21:29:40 -- setup/common.sh@28 -- # mapfile -t mem 00:16:19.266 21:29:40 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:16:19.266 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.266 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.266 21:29:40 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6704844 kB' 'MemAvailable: 9475356 kB' 'Buffers: 2436 kB' 'Cached: 2974808 kB' 'SwapCached: 0 kB' 'Active: 445532 kB' 'Inactive: 2645876 kB' 'Active(anon): 124632 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2645876 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 115812 kB' 'Mapped: 48184 kB' 'Shmem: 10468 kB' 'KReclaimable: 81348 kB' 'Slab: 159316 kB' 'SReclaimable: 81348 kB' 'SUnreclaim: 77968 kB' 'KernelStack: 6464 kB' 'PageTables: 3852 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 334408 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54804 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB' 00:16:19.266 21:29:40 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:19.266 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.266 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.266 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.266 21:29:40 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:19.266 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.266 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.266 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.266 21:29:40 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:19.266 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.266 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.266 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.266 21:29:40 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:19.266 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.266 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.266 21:29:40 -- setup/common.sh@31 -- # read -r var 
val _ 00:16:19.266 21:29:40 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:19.266 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.266 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.266 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.266 21:29:40 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:19.266 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.266 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.266 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.266 21:29:40 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:19.266 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.266 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.266 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.266 21:29:40 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:19.266 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.266 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.266 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.266 21:29:40 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:19.266 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.266 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.266 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.266 21:29:40 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:19.266 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.266 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.266 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.266 21:29:40 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:19.266 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.266 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.266 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.266 21:29:40 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:19.266 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.266 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.266 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.266 21:29:40 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:19.266 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.266 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.266 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.266 21:29:40 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:19.266 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.266 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.266 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.266 21:29:40 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:19.266 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.266 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.266 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.266 21:29:40 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:19.266 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.266 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.266 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.266 21:29:40 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:19.266 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.266 21:29:40 -- 
setup/common.sh@31 -- # IFS=': ' 00:16:19.266 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.266 21:29:40 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:19.266 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.266 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.266 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.266 21:29:40 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:19.266 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.266 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.266 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.266 21:29:40 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:19.266 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.266 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.266 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.266 21:29:40 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:19.266 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.266 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.266 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.266 21:29:40 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:19.266 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.266 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.266 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.266 21:29:40 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:19.266 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.266 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.266 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.266 21:29:40 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:19.266 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.266 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.266 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.266 21:29:40 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:19.266 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.266 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.266 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.266 21:29:40 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:19.266 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.266 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.266 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.266 21:29:40 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:19.266 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.266 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.266 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.266 21:29:40 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:19.266 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.266 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.266 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.266 21:29:40 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:19.266 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.266 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.266 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.266 21:29:40 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
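
A little further down in this trace, once the HugePages_Rsvd lookup completes, setup/hugepages.sh echoes nr_hugepages=1024 / resv_hugepages=0 / surplus_hugepages=0 / anon_hugepages=0 and then asserts that the kernel-reported counters add up to the requested pool. A hypothetical stand-alone restatement of that consistency check, using the values visible in this log (variable names chosen here for illustration):

    # Hypothetical restatement of the check traced below: the expected pool
    # (1024 pages) must equal allocated pages plus surplus and reserved pages.
    nr_hugepages=1024   # echoed by the test as "nr_hugepages=1024"
    surp=0              # from the HugePages_Surp lookup
    resv=0              # from the HugePages_Rsvd lookup
    anon=0              # from the AnonHugePages lookup

    (( 1024 == nr_hugepages + surp + resv )) && echo "hugepage accounting is consistent"
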
00:16:19.266 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.266 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.266 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.266 21:29:40 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:19.266 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.266 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.266 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.267 21:29:40 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:19.267 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.267 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.267 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.267 21:29:40 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:19.267 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.267 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.267 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.267 21:29:40 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:19.267 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.267 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.267 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.267 21:29:40 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:19.267 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.267 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.267 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.267 21:29:40 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:19.267 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.267 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.267 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.267 21:29:40 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:19.267 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.267 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.267 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.267 21:29:40 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:19.267 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.267 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.267 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.267 21:29:40 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:19.267 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.267 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.267 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.267 21:29:40 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:19.267 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.267 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.267 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.267 21:29:40 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:19.267 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.267 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.267 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.267 21:29:40 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:19.267 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.267 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.267 21:29:40 -- setup/common.sh@31 -- # 
read -r var val _ 00:16:19.267 21:29:40 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:19.267 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.267 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.267 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.267 21:29:40 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:19.267 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.267 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.267 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.267 21:29:40 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:19.267 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.267 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.267 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.267 21:29:40 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:19.267 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.267 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.267 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.267 21:29:40 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:19.267 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.267 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.267 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.267 21:29:40 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:19.267 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.267 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.267 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.267 21:29:40 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:19.267 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.267 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.267 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.267 21:29:40 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:19.267 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.267 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.267 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.267 21:29:40 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:19.267 21:29:40 -- setup/common.sh@33 -- # echo 0 00:16:19.267 21:29:40 -- setup/common.sh@33 -- # return 0 00:16:19.267 21:29:40 -- setup/hugepages.sh@100 -- # resv=0 00:16:19.267 21:29:40 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:16:19.267 nr_hugepages=1024 00:16:19.267 resv_hugepages=0 00:16:19.267 surplus_hugepages=0 00:16:19.267 anon_hugepages=0 00:16:19.267 21:29:40 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:16:19.267 21:29:40 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:16:19.267 21:29:40 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:16:19.267 21:29:40 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:16:19.267 21:29:40 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:16:19.267 21:29:40 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:16:19.267 21:29:40 -- setup/common.sh@17 -- # local get=HugePages_Total 00:16:19.267 21:29:40 -- setup/common.sh@18 -- # local node= 00:16:19.267 21:29:40 -- setup/common.sh@19 -- # local var val 00:16:19.267 21:29:40 -- setup/common.sh@20 -- # local mem_f mem 00:16:19.267 21:29:40 -- setup/common.sh@22 -- # 
mem_f=/proc/meminfo 00:16:19.267 21:29:40 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:16:19.267 21:29:40 -- setup/common.sh@25 -- # [[ -n '' ]] 00:16:19.267 21:29:40 -- setup/common.sh@28 -- # mapfile -t mem 00:16:19.267 21:29:40 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:16:19.267 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.267 21:29:40 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6704844 kB' 'MemAvailable: 9475356 kB' 'Buffers: 2436 kB' 'Cached: 2974808 kB' 'SwapCached: 0 kB' 'Active: 445604 kB' 'Inactive: 2645876 kB' 'Active(anon): 124704 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2645876 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 115856 kB' 'Mapped: 48184 kB' 'Shmem: 10468 kB' 'KReclaimable: 81348 kB' 'Slab: 159316 kB' 'SReclaimable: 81348 kB' 'SUnreclaim: 77968 kB' 'KernelStack: 6480 kB' 'PageTables: 3900 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 334408 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54788 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB' 00:16:19.267 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.267 21:29:40 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:19.267 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.267 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.267 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.267 21:29:40 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:19.267 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.267 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.267 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.267 21:29:40 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:19.267 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.267 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.267 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.267 21:29:40 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:19.267 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.267 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.267 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.267 21:29:40 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:19.267 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.267 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.267 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.267 21:29:40 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:19.267 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.267 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.267 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.267 21:29:40 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
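
Before each field lookup, the traced helper snapshots the whole meminfo file into an array: it prefers a per-node file when a node is given, falls back to /proc/meminfo otherwise, strips any leading "Node <n> " prefix, and re-emits the snapshot (the long single-line dump of MemTotal/MemFree/... seen above). A rough, hypothetical reconstruction of that capture step, inferred only from this trace and not taken from the SPDK sources:

    # Hypothetical sketch of the snapshot step visible in the trace
    # (mapfile + 'Node <n> ' prefix strip).
    shopt -s extglob                        # required for the +([0-9]) pattern below
    node=""                                 # empty => read the system-wide file
    mem_f=/proc/meminfo
    [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"               # one array element per meminfo row
    mem=("${mem[@]#Node +([0-9]) }")        # drop the per-node "Node 0 " prefix
    printf '%s\n' "${mem[@]}"               # re-emit the snapshot, as in the log
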
00:16:19.267 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.267 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.267 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.267 21:29:40 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:19.267 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.267 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.267 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.267 21:29:40 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:19.268 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.268 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.268 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.268 21:29:40 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:19.268 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.268 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.268 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.268 21:29:40 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:19.268 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.268 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.268 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.268 21:29:40 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:19.268 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.268 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.268 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.268 21:29:40 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:19.268 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.268 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.268 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.268 21:29:40 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:19.268 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.268 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.268 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.268 21:29:40 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:19.268 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.268 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.268 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.268 21:29:40 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:19.268 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.268 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.268 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.268 21:29:40 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:19.268 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.268 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.268 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.527 21:29:40 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:19.527 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.527 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.527 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.527 21:29:40 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:19.527 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.527 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.527 21:29:40 -- setup/common.sh@31 -- # 
read -r var val _ 00:16:19.527 21:29:40 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:19.527 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.527 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.527 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.527 21:29:40 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:19.527 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.527 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.527 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.527 21:29:40 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:19.527 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.527 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.527 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.527 21:29:40 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:19.527 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.527 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.527 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.527 21:29:40 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:19.527 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.527 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.527 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.527 21:29:40 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:19.527 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.527 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.527 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.527 21:29:40 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:19.527 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.527 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.527 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.527 21:29:40 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:19.527 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.527 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.527 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.527 21:29:40 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:19.527 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.527 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.527 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.527 21:29:40 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:19.527 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.527 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.527 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.527 21:29:40 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:19.527 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.527 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.527 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.527 21:29:40 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:19.527 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.527 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.527 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.527 21:29:40 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:19.527 21:29:40 -- setup/common.sh@32 -- # 
continue 00:16:19.527 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.527 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.527 21:29:40 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:19.527 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.527 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.527 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.527 21:29:40 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:19.527 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.527 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.527 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.527 21:29:40 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:19.527 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.527 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.527 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.527 21:29:40 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:19.527 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.527 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.527 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.527 21:29:40 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:19.527 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.527 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.527 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.527 21:29:40 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:19.527 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.527 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.527 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.527 21:29:40 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:19.527 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.527 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.527 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.527 21:29:40 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:19.527 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.527 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.527 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.527 21:29:40 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:19.527 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.527 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.527 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.527 21:29:40 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:19.527 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.527 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.527 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.527 21:29:40 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:19.527 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.527 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.527 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.527 21:29:40 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:19.527 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.527 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.527 21:29:40 -- setup/common.sh@31 -- # read -r var val 
_ 00:16:19.527 21:29:40 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:19.527 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.527 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.527 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.527 21:29:40 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:19.527 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.527 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.527 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.527 21:29:40 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:19.527 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.527 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.527 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.527 21:29:40 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:19.527 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.527 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.527 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.527 21:29:40 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:19.527 21:29:40 -- setup/common.sh@33 -- # echo 1024 00:16:19.527 21:29:40 -- setup/common.sh@33 -- # return 0 00:16:19.527 21:29:40 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:16:19.527 21:29:40 -- setup/hugepages.sh@112 -- # get_nodes 00:16:19.527 21:29:40 -- setup/hugepages.sh@27 -- # local node 00:16:19.527 21:29:40 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:16:19.527 21:29:40 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:16:19.527 21:29:40 -- setup/hugepages.sh@32 -- # no_nodes=1 00:16:19.527 21:29:40 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:16:19.527 21:29:40 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:16:19.527 21:29:40 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:16:19.527 21:29:40 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:16:19.527 21:29:40 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:16:19.527 21:29:40 -- setup/common.sh@18 -- # local node=0 00:16:19.527 21:29:40 -- setup/common.sh@19 -- # local var val 00:16:19.527 21:29:40 -- setup/common.sh@20 -- # local mem_f mem 00:16:19.527 21:29:40 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:16:19.527 21:29:40 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:16:19.527 21:29:40 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:16:19.527 21:29:40 -- setup/common.sh@28 -- # mapfile -t mem 00:16:19.527 21:29:40 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:16:19.527 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.527 21:29:40 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6704844 kB' 'MemUsed: 5537132 kB' 'SwapCached: 0 kB' 'Active: 445484 kB' 'Inactive: 2645876 kB' 'Active(anon): 124584 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2645876 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 2977244 kB' 'Mapped: 48184 kB' 'AnonPages: 115692 kB' 'Shmem: 10468 kB' 'KernelStack: 6448 kB' 'PageTables: 3800 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 81348 kB' 'Slab: 159316 kB' 'SReclaimable: 81348 kB' 'SUnreclaim: 77968 kB' 
'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:16:19.527 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.527 21:29:40 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:19.527 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.527 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.527 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.527 21:29:40 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:19.527 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.527 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.527 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.527 21:29:40 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:19.527 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.527 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.527 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.527 21:29:40 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:19.527 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.527 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.527 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.527 21:29:40 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:19.527 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.527 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.527 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.527 21:29:40 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:19.527 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.527 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.527 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.527 21:29:40 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:19.527 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.527 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.527 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.527 21:29:40 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:19.527 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.527 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.527 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.527 21:29:40 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:19.527 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.527 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.527 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.527 21:29:40 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:19.527 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.527 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.527 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.527 21:29:40 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:19.527 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.527 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.527 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.527 21:29:40 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:19.527 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.527 21:29:40 -- setup/common.sh@31 -- 
# IFS=': ' 00:16:19.527 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.527 21:29:40 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:19.527 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.527 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.527 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.527 21:29:40 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:19.527 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.527 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.527 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.527 21:29:40 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:19.527 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.527 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.528 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.528 21:29:40 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:19.528 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.528 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.528 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.528 21:29:40 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:19.528 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.528 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.528 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.528 21:29:40 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:19.528 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.528 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.528 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.528 21:29:40 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:19.528 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.528 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.528 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.528 21:29:40 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:19.528 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.528 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.528 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.528 21:29:40 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:19.528 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.528 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.528 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.528 21:29:40 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:19.528 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.528 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.528 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.528 21:29:40 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:19.528 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.528 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.528 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.528 21:29:40 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:19.528 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.528 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.528 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.528 21:29:40 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:19.528 
21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.528 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.528 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.528 21:29:40 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:19.528 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.528 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.528 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.528 21:29:40 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:19.528 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.528 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.528 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.528 21:29:40 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:19.528 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.528 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.528 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.528 21:29:40 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:19.528 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.528 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.528 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.528 21:29:40 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:19.528 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.528 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.528 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.528 21:29:40 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:19.528 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.528 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.528 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.528 21:29:40 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:19.528 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.528 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.528 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.528 21:29:40 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:19.528 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.528 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.528 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.528 21:29:40 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:19.528 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.528 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.528 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.528 21:29:40 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:19.528 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.528 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.528 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.528 21:29:40 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:19.528 21:29:40 -- setup/common.sh@32 -- # continue 00:16:19.528 21:29:40 -- setup/common.sh@31 -- # IFS=': ' 00:16:19.528 21:29:40 -- setup/common.sh@31 -- # read -r var val _ 00:16:19.528 21:29:40 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:19.528 21:29:40 -- setup/common.sh@33 -- # echo 0 00:16:19.528 21:29:40 -- setup/common.sh@33 -- # return 0 00:16:19.528 21:29:40 -- setup/hugepages.sh@117 -- # (( 
nodes_test[node] += 0 )) 00:16:19.528 21:29:40 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:16:19.528 21:29:40 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:16:19.528 node0=1024 expecting 1024 00:16:19.528 21:29:40 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:16:19.528 21:29:40 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:16:19.528 21:29:40 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:16:19.528 00:16:19.528 real 0m1.133s 00:16:19.528 user 0m0.538s 00:16:19.528 sys 0m0.582s 00:16:19.528 21:29:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:19.528 ************************************ 00:16:19.528 END TEST no_shrink_alloc 00:16:19.528 ************************************ 00:16:19.528 21:29:40 -- common/autotest_common.sh@10 -- # set +x 00:16:19.528 21:29:40 -- setup/hugepages.sh@217 -- # clear_hp 00:16:19.528 21:29:40 -- setup/hugepages.sh@37 -- # local node hp 00:16:19.528 21:29:40 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:16:19.528 21:29:40 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:16:19.528 21:29:40 -- setup/hugepages.sh@41 -- # echo 0 00:16:19.528 21:29:40 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:16:19.528 21:29:40 -- setup/hugepages.sh@41 -- # echo 0 00:16:19.528 21:29:40 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:16:19.528 21:29:40 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:16:19.528 00:16:19.528 real 0m4.671s 00:16:19.528 user 0m2.144s 00:16:19.528 sys 0m2.390s 00:16:19.528 21:29:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:19.528 21:29:40 -- common/autotest_common.sh@10 -- # set +x 00:16:19.528 ************************************ 00:16:19.528 END TEST hugepages 00:16:19.528 ************************************ 00:16:19.528 21:29:40 -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:16:19.528 21:29:40 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:16:19.528 21:29:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:19.528 21:29:40 -- common/autotest_common.sh@10 -- # set +x 00:16:19.528 ************************************ 00:16:19.528 START TEST driver 00:16:19.528 ************************************ 00:16:19.528 21:29:40 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:16:19.528 * Looking for test storage... 
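An aside for anyone skimming the long run of "continue" lines above: that is the get_meminfo helper in setup/common.sh scanning every field of /proc/meminfo (or the per-node copy under /sys/devices/system/node) until it reaches the requested key, here HugePages_Total and then HugePages_Surp for node 0. A minimal sketch of that loop, reconstructed from the trace rather than copied from the SPDK source, so treat the exact names as illustrative:

  shopt -s extglob                                  # needed for the +([0-9]) pattern below
  get_meminfo() {                                   # e.g. get_meminfo HugePages_Surp 0
      local get=$1 node=${2:-}
      local mem_f=/proc/meminfo mem var val
      [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
          mem_f=/sys/devices/system/node/node$node/meminfo
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")              # per-node files prefix each line with "Node N "
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
          # any non-matching key is simply skipped, hence the wall of "continue" in the trace
      done < <(printf '%s\n' "${mem[@]}")
      return 1
  }

With the value in hand the hugepages test only has to check that the totals add up, which is what the (( 1024 == nr_hugepages + surp + resv )) assertion and the closing "node0=1024 expecting 1024" line above express.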
00:16:19.528 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:16:19.528 21:29:40 -- setup/driver.sh@68 -- # setup reset 00:16:19.528 21:29:40 -- setup/common.sh@9 -- # [[ reset == output ]] 00:16:19.528 21:29:40 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:16:20.092 21:29:41 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:16:20.092 21:29:41 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:16:20.092 21:29:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:20.092 21:29:41 -- common/autotest_common.sh@10 -- # set +x 00:16:20.348 ************************************ 00:16:20.348 START TEST guess_driver 00:16:20.348 ************************************ 00:16:20.348 21:29:41 -- common/autotest_common.sh@1104 -- # guess_driver 00:16:20.348 21:29:41 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:16:20.348 21:29:41 -- setup/driver.sh@47 -- # local fail=0 00:16:20.348 21:29:41 -- setup/driver.sh@49 -- # pick_driver 00:16:20.348 21:29:41 -- setup/driver.sh@36 -- # vfio 00:16:20.348 21:29:41 -- setup/driver.sh@21 -- # local iommu_grups 00:16:20.348 21:29:41 -- setup/driver.sh@22 -- # local unsafe_vfio 00:16:20.348 21:29:41 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:16:20.348 21:29:41 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:16:20.348 21:29:41 -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:16:20.348 21:29:41 -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:16:20.349 21:29:41 -- setup/driver.sh@32 -- # return 1 00:16:20.349 21:29:41 -- setup/driver.sh@38 -- # uio 00:16:20.349 21:29:41 -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:16:20.349 21:29:41 -- setup/driver.sh@14 -- # mod uio_pci_generic 00:16:20.349 21:29:41 -- setup/driver.sh@12 -- # dep uio_pci_generic 00:16:20.349 21:29:41 -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:16:20.349 21:29:41 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:16:20.349 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:16:20.349 21:29:41 -- setup/driver.sh@39 -- # echo uio_pci_generic 00:16:20.349 Looking for driver=uio_pci_generic 00:16:20.349 21:29:41 -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:16:20.349 21:29:41 -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:16:20.349 21:29:41 -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:16:20.349 21:29:41 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:16:20.349 21:29:41 -- setup/driver.sh@45 -- # setup output config 00:16:20.349 21:29:41 -- setup/common.sh@9 -- # [[ output == output ]] 00:16:20.349 21:29:41 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:16:20.914 21:29:41 -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:16:20.914 21:29:41 -- setup/driver.sh@58 -- # continue 00:16:20.914 21:29:41 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:16:20.914 21:29:41 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:16:20.914 21:29:41 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:16:20.914 21:29:41 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:16:21.172 21:29:41 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:16:21.172 21:29:41 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:16:21.172 21:29:41 -- 
setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:16:21.172 21:29:41 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:16:21.172 21:29:41 -- setup/driver.sh@65 -- # setup reset 00:16:21.172 21:29:41 -- setup/common.sh@9 -- # [[ reset == output ]] 00:16:21.172 21:29:41 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:16:21.739 00:16:21.739 real 0m1.418s 00:16:21.739 user 0m0.508s 00:16:21.739 sys 0m0.922s 00:16:21.739 21:29:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:21.739 ************************************ 00:16:21.739 END TEST guess_driver 00:16:21.739 ************************************ 00:16:21.739 21:29:42 -- common/autotest_common.sh@10 -- # set +x 00:16:21.739 00:16:21.739 real 0m2.119s 00:16:21.739 user 0m0.738s 00:16:21.739 sys 0m1.446s 00:16:21.739 21:29:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:21.739 21:29:42 -- common/autotest_common.sh@10 -- # set +x 00:16:21.739 ************************************ 00:16:21.739 END TEST driver 00:16:21.739 ************************************ 00:16:21.739 21:29:42 -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:16:21.739 21:29:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:16:21.739 21:29:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:21.739 21:29:42 -- common/autotest_common.sh@10 -- # set +x 00:16:21.739 ************************************ 00:16:21.739 START TEST devices 00:16:21.739 ************************************ 00:16:21.739 21:29:42 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:16:21.739 * Looking for test storage... 00:16:21.739 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:16:21.739 21:29:42 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:16:21.739 21:29:42 -- setup/devices.sh@192 -- # setup reset 00:16:21.739 21:29:42 -- setup/common.sh@9 -- # [[ reset == output ]] 00:16:21.739 21:29:42 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:16:22.692 21:29:43 -- setup/devices.sh@194 -- # get_zoned_devs 00:16:22.692 21:29:43 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:16:22.692 21:29:43 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:16:22.692 21:29:43 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:16:22.692 21:29:43 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:16:22.692 21:29:43 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:16:22.692 21:29:43 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:16:22.692 21:29:43 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:16:22.692 21:29:43 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:16:22.692 21:29:43 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:16:22.692 21:29:43 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n1 00:16:22.692 21:29:43 -- common/autotest_common.sh@1647 -- # local device=nvme1n1 00:16:22.692 21:29:43 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:16:22.692 21:29:43 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:16:22.692 21:29:43 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:16:22.692 21:29:43 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n2 00:16:22.692 21:29:43 -- common/autotest_common.sh@1647 -- # local device=nvme1n2 00:16:22.692 21:29:43 -- 
common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:16:22.692 21:29:43 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:16:22.692 21:29:43 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:16:22.692 21:29:43 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n3 00:16:22.692 21:29:43 -- common/autotest_common.sh@1647 -- # local device=nvme1n3 00:16:22.692 21:29:43 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:16:22.692 21:29:43 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:16:22.692 21:29:43 -- setup/devices.sh@196 -- # blocks=() 00:16:22.692 21:29:43 -- setup/devices.sh@196 -- # declare -a blocks 00:16:22.692 21:29:43 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:16:22.692 21:29:43 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:16:22.692 21:29:43 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:16:22.692 21:29:43 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:16:22.692 21:29:43 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:16:22.692 21:29:43 -- setup/devices.sh@201 -- # ctrl=nvme0 00:16:22.692 21:29:43 -- setup/devices.sh@202 -- # pci=0000:00:06.0 00:16:22.692 21:29:43 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:16:22.692 21:29:43 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:16:22.692 21:29:43 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:16:22.692 21:29:43 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:16:22.692 No valid GPT data, bailing 00:16:22.692 21:29:43 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:16:22.692 21:29:43 -- scripts/common.sh@393 -- # pt= 00:16:22.692 21:29:43 -- scripts/common.sh@394 -- # return 1 00:16:22.692 21:29:43 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:16:22.692 21:29:43 -- setup/common.sh@76 -- # local dev=nvme0n1 00:16:22.692 21:29:43 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:16:22.692 21:29:43 -- setup/common.sh@80 -- # echo 5368709120 00:16:22.692 21:29:43 -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:16:22.693 21:29:43 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:16:22.693 21:29:43 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:06.0 00:16:22.693 21:29:43 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:16:22.693 21:29:43 -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:16:22.693 21:29:43 -- setup/devices.sh@201 -- # ctrl=nvme1 00:16:22.693 21:29:43 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:16:22.693 21:29:43 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:16:22.693 21:29:43 -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:16:22.693 21:29:43 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:16:22.693 21:29:43 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:16:22.693 No valid GPT data, bailing 00:16:22.693 21:29:43 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:16:22.693 21:29:43 -- scripts/common.sh@393 -- # pt= 00:16:22.693 21:29:43 -- scripts/common.sh@394 -- # return 1 00:16:22.693 21:29:43 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:16:22.693 21:29:43 -- setup/common.sh@76 -- # local dev=nvme1n1 00:16:22.693 21:29:43 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:16:22.693 21:29:43 -- setup/common.sh@80 -- # echo 4294967296 00:16:22.693 21:29:43 -- 
setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:16:22.693 21:29:43 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:16:22.693 21:29:43 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:16:22.693 21:29:43 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:16:22.693 21:29:43 -- setup/devices.sh@201 -- # ctrl=nvme1n2 00:16:22.693 21:29:43 -- setup/devices.sh@201 -- # ctrl=nvme1 00:16:22.693 21:29:43 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:16:22.693 21:29:43 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:16:22.693 21:29:43 -- setup/devices.sh@204 -- # block_in_use nvme1n2 00:16:22.693 21:29:43 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:16:22.693 21:29:43 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2 00:16:22.693 No valid GPT data, bailing 00:16:22.693 21:29:43 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:16:22.693 21:29:43 -- scripts/common.sh@393 -- # pt= 00:16:22.693 21:29:43 -- scripts/common.sh@394 -- # return 1 00:16:22.693 21:29:43 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n2 00:16:22.693 21:29:43 -- setup/common.sh@76 -- # local dev=nvme1n2 00:16:22.693 21:29:43 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n2 ]] 00:16:22.693 21:29:43 -- setup/common.sh@80 -- # echo 4294967296 00:16:22.693 21:29:43 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:16:22.693 21:29:43 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:16:22.693 21:29:43 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:16:22.693 21:29:43 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:16:22.693 21:29:43 -- setup/devices.sh@201 -- # ctrl=nvme1n3 00:16:22.693 21:29:43 -- setup/devices.sh@201 -- # ctrl=nvme1 00:16:22.693 21:29:43 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:16:22.693 21:29:43 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:16:22.693 21:29:43 -- setup/devices.sh@204 -- # block_in_use nvme1n3 00:16:22.693 21:29:43 -- scripts/common.sh@380 -- # local block=nvme1n3 pt 00:16:22.693 21:29:43 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3 00:16:22.693 No valid GPT data, bailing 00:16:22.693 21:29:43 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:16:22.693 21:29:43 -- scripts/common.sh@393 -- # pt= 00:16:22.693 21:29:43 -- scripts/common.sh@394 -- # return 1 00:16:22.693 21:29:43 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n3 00:16:22.693 21:29:43 -- setup/common.sh@76 -- # local dev=nvme1n3 00:16:22.693 21:29:43 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n3 ]] 00:16:22.693 21:29:43 -- setup/common.sh@80 -- # echo 4294967296 00:16:22.693 21:29:43 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:16:22.693 21:29:43 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:16:22.693 21:29:43 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:16:22.693 21:29:43 -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:16:22.693 21:29:43 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:16:22.693 21:29:43 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:16:22.693 21:29:43 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:16:22.693 21:29:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:22.693 21:29:43 -- common/autotest_common.sh@10 -- # set +x 00:16:22.693 
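The four "No valid GPT data, bailing" messages above are setup/devices.sh filtering candidate disks before the mount tests start: a namespace only joins the test pool if neither spdk-gpt.py nor blkid finds a partition table on it and its capacity clears min_disk_size (3221225472 bytes, i.e. 3 GiB). A condensed sketch of that filter, pieced together from the trace; the sysfs lookups and the plain blkid check stand in for the script's block_in_use and sec_size_to_bytes helpers and are illustrative only:

  shopt -s extglob nullglob
  min_disk_size=$((3 * 1024 * 1024 * 1024))         # 3 GiB, the threshold seen in the trace
  blocks=(); declare -A blocks_to_pci
  for block in /sys/block/nvme!(*c*); do            # skip nvme*c* multipath nodes
      name=${block##*/}
      # resolve the owning controller's PCI address, e.g. 0000:00:06.0 (path may vary by kernel)
      pci=$(basename "$(readlink -f "$block/device/device")")
      pt=$(blkid -s PTTYPE -o value "/dev/$name")   # the real script consults spdk-gpt.py first
      [[ -n $pt ]] && continue                      # an existing partition table means "in use"
      size=$(( $(<"$block/size") * 512 ))           # sysfs sizes are in 512-byte sectors
      (( size >= min_disk_size )) || continue
      blocks+=("$name"); blocks_to_pci["$name"]=$pci
  done

In this run all four namespaces pass, and nvme0n1 becomes the dedicated test disk (the declare -r test_disk=nvme0n1 above), with its PCI address later reused for the PCI_ALLOWED filtering.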
************************************ 00:16:22.693 START TEST nvme_mount 00:16:22.693 ************************************ 00:16:22.693 21:29:43 -- common/autotest_common.sh@1104 -- # nvme_mount 00:16:22.693 21:29:43 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:16:22.693 21:29:43 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:16:22.693 21:29:43 -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:16:22.693 21:29:43 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:16:22.693 21:29:43 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:16:22.693 21:29:43 -- setup/common.sh@39 -- # local disk=nvme0n1 00:16:22.693 21:29:43 -- setup/common.sh@40 -- # local part_no=1 00:16:22.693 21:29:43 -- setup/common.sh@41 -- # local size=1073741824 00:16:22.693 21:29:43 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:16:22.693 21:29:43 -- setup/common.sh@44 -- # parts=() 00:16:22.693 21:29:43 -- setup/common.sh@44 -- # local parts 00:16:22.693 21:29:43 -- setup/common.sh@46 -- # (( part = 1 )) 00:16:22.693 21:29:43 -- setup/common.sh@46 -- # (( part <= part_no )) 00:16:22.693 21:29:43 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:16:22.693 21:29:43 -- setup/common.sh@46 -- # (( part++ )) 00:16:22.693 21:29:43 -- setup/common.sh@46 -- # (( part <= part_no )) 00:16:22.693 21:29:43 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:16:22.693 21:29:43 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:16:22.693 21:29:43 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:16:24.067 Creating new GPT entries in memory. 00:16:24.067 GPT data structures destroyed! You may now partition the disk using fdisk or 00:16:24.067 other utilities. 00:16:24.067 21:29:44 -- setup/common.sh@57 -- # (( part = 1 )) 00:16:24.067 21:29:44 -- setup/common.sh@57 -- # (( part <= part_no )) 00:16:24.067 21:29:44 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:16:24.067 21:29:44 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:16:24.067 21:29:44 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:16:25.005 Creating new GPT entries in memory. 00:16:25.005 The operation has completed successfully. 
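The partition step that just completed is a small handshake worth spelling out: partition_drive wipes the disk with sgdisk --zap-all, then creates each partition under flock on the whole disk while scripts/sync_dev_uevents.sh, started in the background, waits for the kernel's "add" uevent (the wait 64255 just below reaps that background job), so mkfs only runs once /dev/nvme0n1p1 actually exists. A stand-alone sketch of the same flow, using the sgdisk calls seen in the trace; udevadm settle is a generic substitute for the repo's uevent listener:

  disk=/dev/nvme0n1
  part=${disk}p1
  mnt=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount
  sgdisk "$disk" --zap-all                          # drop any existing GPT/MBR structures
  flock "$disk" sgdisk "$disk" --new=1:2048:264191  # 262144 sectors, 1 GiB at the 4 KiB block size the helper assumes
  udevadm settle                                    # wait for /dev/nvme0n1p1 to appear
  mkfs.ext4 -qF "$part"
  mkdir -p "$mnt"
  mount "$part" "$mnt"

Holding flock on the whole disk serializes this step against anything else in the test that takes the same lock, and waiting for the uevent avoids the classic mkfs-before-the-device-node-exists race.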
00:16:25.005 21:29:45 -- setup/common.sh@57 -- # (( part++ )) 00:16:25.005 21:29:45 -- setup/common.sh@57 -- # (( part <= part_no )) 00:16:25.005 21:29:45 -- setup/common.sh@62 -- # wait 64255 00:16:25.005 21:29:45 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:16:25.005 21:29:45 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:16:25.005 21:29:45 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:16:25.005 21:29:45 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:16:25.005 21:29:45 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:16:25.005 21:29:45 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:16:25.005 21:29:45 -- setup/devices.sh@105 -- # verify 0000:00:06.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:16:25.005 21:29:45 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:16:25.005 21:29:45 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:16:25.005 21:29:45 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:16:25.005 21:29:45 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:16:25.005 21:29:45 -- setup/devices.sh@53 -- # local found=0 00:16:25.005 21:29:45 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:16:25.005 21:29:45 -- setup/devices.sh@56 -- # : 00:16:25.005 21:29:45 -- setup/devices.sh@59 -- # local pci status 00:16:25.005 21:29:45 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:16:25.005 21:29:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:25.005 21:29:45 -- setup/devices.sh@47 -- # setup output config 00:16:25.005 21:29:45 -- setup/common.sh@9 -- # [[ output == output ]] 00:16:25.005 21:29:45 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:16:25.005 21:29:45 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:16:25.005 21:29:45 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:16:25.005 21:29:45 -- setup/devices.sh@63 -- # found=1 00:16:25.005 21:29:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:25.005 21:29:45 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:16:25.005 21:29:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:25.285 21:29:46 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:16:25.285 21:29:46 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:25.543 21:29:46 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:16:25.543 21:29:46 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:25.543 21:29:46 -- setup/devices.sh@66 -- # (( found == 1 )) 00:16:25.543 21:29:46 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:16:25.543 21:29:46 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:16:25.543 21:29:46 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:16:25.543 21:29:46 -- setup/devices.sh@74 -- # rm 
/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:16:25.543 21:29:46 -- setup/devices.sh@110 -- # cleanup_nvme 00:16:25.543 21:29:46 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:16:25.543 21:29:46 -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:16:25.543 21:29:46 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:16:25.543 21:29:46 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:16:25.543 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:16:25.543 21:29:46 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:16:25.543 21:29:46 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:16:25.802 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:16:25.802 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:16:25.802 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:16:25.802 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:16:25.802 21:29:46 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:16:25.802 21:29:46 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:16:25.802 21:29:46 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:16:25.802 21:29:46 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:16:25.802 21:29:46 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:16:25.802 21:29:46 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:16:25.802 21:29:46 -- setup/devices.sh@116 -- # verify 0000:00:06.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:16:25.802 21:29:46 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:16:25.802 21:29:46 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:16:25.802 21:29:46 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:16:25.802 21:29:46 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:16:25.802 21:29:46 -- setup/devices.sh@53 -- # local found=0 00:16:25.802 21:29:46 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:16:25.802 21:29:46 -- setup/devices.sh@56 -- # : 00:16:25.802 21:29:46 -- setup/devices.sh@59 -- # local pci status 00:16:25.802 21:29:46 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:25.802 21:29:46 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:16:25.802 21:29:46 -- setup/devices.sh@47 -- # setup output config 00:16:25.802 21:29:46 -- setup/common.sh@9 -- # [[ output == output ]] 00:16:25.802 21:29:46 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:16:26.060 21:29:46 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:16:26.060 21:29:46 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:16:26.060 21:29:46 -- setup/devices.sh@63 -- # found=1 00:16:26.060 21:29:46 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:26.060 21:29:46 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:16:26.060 
21:29:46 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:26.354 21:29:47 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:16:26.354 21:29:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:26.354 21:29:47 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:16:26.354 21:29:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:26.354 21:29:47 -- setup/devices.sh@66 -- # (( found == 1 )) 00:16:26.354 21:29:47 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:16:26.354 21:29:47 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:16:26.354 21:29:47 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:16:26.354 21:29:47 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:16:26.354 21:29:47 -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:16:26.354 21:29:47 -- setup/devices.sh@125 -- # verify 0000:00:06.0 data@nvme0n1 '' '' 00:16:26.354 21:29:47 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:16:26.354 21:29:47 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:16:26.354 21:29:47 -- setup/devices.sh@50 -- # local mount_point= 00:16:26.354 21:29:47 -- setup/devices.sh@51 -- # local test_file= 00:16:26.354 21:29:47 -- setup/devices.sh@53 -- # local found=0 00:16:26.354 21:29:47 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:16:26.354 21:29:47 -- setup/devices.sh@59 -- # local pci status 00:16:26.354 21:29:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:26.354 21:29:47 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:16:26.354 21:29:47 -- setup/devices.sh@47 -- # setup output config 00:16:26.354 21:29:47 -- setup/common.sh@9 -- # [[ output == output ]] 00:16:26.354 21:29:47 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:16:26.619 21:29:47 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:16:26.619 21:29:47 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:16:26.619 21:29:47 -- setup/devices.sh@63 -- # found=1 00:16:26.619 21:29:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:26.619 21:29:47 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:16:26.619 21:29:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:26.877 21:29:47 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:16:26.877 21:29:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:27.136 21:29:47 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:16:27.136 21:29:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:27.136 21:29:47 -- setup/devices.sh@66 -- # (( found == 1 )) 00:16:27.136 21:29:47 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:16:27.136 21:29:47 -- setup/devices.sh@68 -- # return 0 00:16:27.136 21:29:47 -- setup/devices.sh@128 -- # cleanup_nvme 00:16:27.136 21:29:47 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:16:27.136 21:29:47 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:16:27.136 21:29:47 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:16:27.136 21:29:47 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:16:27.136 /dev/nvme0n1: 2 bytes were erased at offset 
0x00000438 (ext4): 53 ef 00:16:27.136 00:16:27.136 real 0m4.333s 00:16:27.136 user 0m0.934s 00:16:27.136 sys 0m1.123s 00:16:27.136 21:29:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:27.136 21:29:47 -- common/autotest_common.sh@10 -- # set +x 00:16:27.136 ************************************ 00:16:27.136 END TEST nvme_mount 00:16:27.136 ************************************ 00:16:27.136 21:29:47 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:16:27.136 21:29:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:16:27.136 21:29:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:27.136 21:29:48 -- common/autotest_common.sh@10 -- # set +x 00:16:27.136 ************************************ 00:16:27.136 START TEST dm_mount 00:16:27.136 ************************************ 00:16:27.136 21:29:48 -- common/autotest_common.sh@1104 -- # dm_mount 00:16:27.136 21:29:48 -- setup/devices.sh@144 -- # pv=nvme0n1 00:16:27.136 21:29:48 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:16:27.136 21:29:48 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:16:27.136 21:29:48 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:16:27.136 21:29:48 -- setup/common.sh@39 -- # local disk=nvme0n1 00:16:27.136 21:29:48 -- setup/common.sh@40 -- # local part_no=2 00:16:27.136 21:29:48 -- setup/common.sh@41 -- # local size=1073741824 00:16:27.136 21:29:48 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:16:27.136 21:29:48 -- setup/common.sh@44 -- # parts=() 00:16:27.136 21:29:48 -- setup/common.sh@44 -- # local parts 00:16:27.136 21:29:48 -- setup/common.sh@46 -- # (( part = 1 )) 00:16:27.136 21:29:48 -- setup/common.sh@46 -- # (( part <= part_no )) 00:16:27.136 21:29:48 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:16:27.136 21:29:48 -- setup/common.sh@46 -- # (( part++ )) 00:16:27.136 21:29:48 -- setup/common.sh@46 -- # (( part <= part_no )) 00:16:27.136 21:29:48 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:16:27.136 21:29:48 -- setup/common.sh@46 -- # (( part++ )) 00:16:27.136 21:29:48 -- setup/common.sh@46 -- # (( part <= part_no )) 00:16:27.136 21:29:48 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:16:27.136 21:29:48 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:16:27.136 21:29:48 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:16:28.508 Creating new GPT entries in memory. 00:16:28.508 GPT data structures destroyed! You may now partition the disk using fdisk or 00:16:28.508 other utilities. 00:16:28.508 21:29:49 -- setup/common.sh@57 -- # (( part = 1 )) 00:16:28.508 21:29:49 -- setup/common.sh@57 -- # (( part <= part_no )) 00:16:28.508 21:29:49 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:16:28.508 21:29:49 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:16:28.508 21:29:49 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:16:29.440 Creating new GPT entries in memory. 00:16:29.440 The operation has completed successfully. 00:16:29.441 21:29:50 -- setup/common.sh@57 -- # (( part++ )) 00:16:29.441 21:29:50 -- setup/common.sh@57 -- # (( part <= part_no )) 00:16:29.441 21:29:50 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:16:29.441 21:29:50 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:16:29.441 21:29:50 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:16:30.375 The operation has completed successfully. 00:16:30.375 21:29:51 -- setup/common.sh@57 -- # (( part++ )) 00:16:30.375 21:29:51 -- setup/common.sh@57 -- # (( part <= part_no )) 00:16:30.375 21:29:51 -- setup/common.sh@62 -- # wait 64709 00:16:30.375 21:29:51 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:16:30.375 21:29:51 -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:16:30.375 21:29:51 -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:16:30.375 21:29:51 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:16:30.375 21:29:51 -- setup/devices.sh@160 -- # for t in {1..5} 00:16:30.375 21:29:51 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:16:30.375 21:29:51 -- setup/devices.sh@161 -- # break 00:16:30.375 21:29:51 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:16:30.375 21:29:51 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:16:30.375 21:29:51 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:16:30.375 21:29:51 -- setup/devices.sh@166 -- # dm=dm-0 00:16:30.375 21:29:51 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:16:30.375 21:29:51 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:16:30.375 21:29:51 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:16:30.375 21:29:51 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:16:30.375 21:29:51 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:16:30.375 21:29:51 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:16:30.375 21:29:51 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:16:30.375 21:29:51 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:16:30.375 21:29:51 -- setup/devices.sh@174 -- # verify 0000:00:06.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:16:30.375 21:29:51 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:16:30.375 21:29:51 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:16:30.375 21:29:51 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:16:30.375 21:29:51 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:16:30.375 21:29:51 -- setup/devices.sh@53 -- # local found=0 00:16:30.375 21:29:51 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:16:30.375 21:29:51 -- setup/devices.sh@56 -- # : 00:16:30.375 21:29:51 -- setup/devices.sh@59 -- # local pci status 00:16:30.375 21:29:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:30.375 21:29:51 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:16:30.375 21:29:51 -- setup/devices.sh@47 -- # setup output config 00:16:30.375 21:29:51 -- setup/common.sh@9 -- # [[ output == output ]] 00:16:30.375 21:29:51 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:16:30.375 21:29:51 -- 
setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:16:30.375 21:29:51 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:16:30.375 21:29:51 -- setup/devices.sh@63 -- # found=1 00:16:30.375 21:29:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:30.375 21:29:51 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:16:30.375 21:29:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:30.678 21:29:51 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:16:30.678 21:29:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:30.942 21:29:51 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:16:30.942 21:29:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:30.942 21:29:51 -- setup/devices.sh@66 -- # (( found == 1 )) 00:16:30.942 21:29:51 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:16:30.942 21:29:51 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:16:30.942 21:29:51 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:16:30.942 21:29:51 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:16:30.942 21:29:51 -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:16:30.942 21:29:51 -- setup/devices.sh@184 -- # verify 0000:00:06.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:16:30.942 21:29:51 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:16:30.942 21:29:51 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:16:30.942 21:29:51 -- setup/devices.sh@50 -- # local mount_point= 00:16:30.942 21:29:51 -- setup/devices.sh@51 -- # local test_file= 00:16:30.942 21:29:51 -- setup/devices.sh@53 -- # local found=0 00:16:30.942 21:29:51 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:16:30.942 21:29:51 -- setup/devices.sh@59 -- # local pci status 00:16:30.942 21:29:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:30.942 21:29:51 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:16:30.942 21:29:51 -- setup/devices.sh@47 -- # setup output config 00:16:30.942 21:29:51 -- setup/common.sh@9 -- # [[ output == output ]] 00:16:30.942 21:29:51 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:16:31.199 21:29:51 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:16:31.199 21:29:51 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:16:31.199 21:29:51 -- setup/devices.sh@63 -- # found=1 00:16:31.199 21:29:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:31.199 21:29:51 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:16:31.199 21:29:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:31.457 21:29:52 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:16:31.457 21:29:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:31.457 21:29:52 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:16:31.457 21:29:52 
-- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:31.716 21:29:52 -- setup/devices.sh@66 -- # (( found == 1 )) 00:16:31.716 21:29:52 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:16:31.716 21:29:52 -- setup/devices.sh@68 -- # return 0 00:16:31.716 21:29:52 -- setup/devices.sh@187 -- # cleanup_dm 00:16:31.716 21:29:52 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:16:31.716 21:29:52 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:16:31.716 21:29:52 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:16:31.716 21:29:52 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:16:31.716 21:29:52 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:16:31.716 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:16:31.716 21:29:52 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:16:31.716 21:29:52 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:16:31.716 00:16:31.716 real 0m4.458s 00:16:31.716 user 0m0.643s 00:16:31.716 sys 0m0.756s 00:16:31.716 21:29:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:31.716 21:29:52 -- common/autotest_common.sh@10 -- # set +x 00:16:31.716 ************************************ 00:16:31.716 END TEST dm_mount 00:16:31.716 ************************************ 00:16:31.716 21:29:52 -- setup/devices.sh@1 -- # cleanup 00:16:31.716 21:29:52 -- setup/devices.sh@11 -- # cleanup_nvme 00:16:31.716 21:29:52 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:16:31.716 21:29:52 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:16:31.716 21:29:52 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:16:31.716 21:29:52 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:16:31.716 21:29:52 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:16:31.974 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:16:31.974 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:16:31.974 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:16:31.974 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:16:31.974 21:29:52 -- setup/devices.sh@12 -- # cleanup_dm 00:16:31.974 21:29:52 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:16:31.974 21:29:52 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:16:31.974 21:29:52 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:16:31.974 21:29:52 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:16:31.974 21:29:52 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:16:31.974 21:29:52 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:16:31.974 00:16:31.974 real 0m10.257s 00:16:31.974 user 0m2.189s 00:16:31.974 sys 0m2.454s 00:16:31.974 21:29:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:31.974 ************************************ 00:16:31.974 END TEST devices 00:16:31.974 ************************************ 00:16:31.974 21:29:52 -- common/autotest_common.sh@10 -- # set +x 00:16:31.974 ************************************ 00:16:31.974 END TEST setup.sh 00:16:31.974 ************************************ 00:16:31.974 00:16:31.974 real 0m21.368s 00:16:31.974 user 0m6.872s 00:16:31.974 sys 0m8.746s 00:16:31.974 21:29:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:31.974 21:29:52 -- common/autotest_common.sh@10 -- # set +x 00:16:31.974 21:29:52 -- 
spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:16:32.232 Hugepages 00:16:32.232 node hugesize free / total 00:16:32.232 node0 1048576kB 0 / 0 00:16:32.232 node0 2048kB 2048 / 2048 00:16:32.232 00:16:32.232 Type BDF Vendor Device NUMA Driver Device Block devices 00:16:32.232 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:16:32.232 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:16:32.489 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:16:32.489 21:29:53 -- spdk/autotest.sh@141 -- # uname -s 00:16:32.489 21:29:53 -- spdk/autotest.sh@141 -- # [[ Linux == Linux ]] 00:16:32.489 21:29:53 -- spdk/autotest.sh@143 -- # nvme_namespace_revert 00:16:32.489 21:29:53 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:33.056 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:33.056 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:16:33.056 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:16:33.314 21:29:54 -- common/autotest_common.sh@1517 -- # sleep 1 00:16:34.247 21:29:55 -- common/autotest_common.sh@1518 -- # bdfs=() 00:16:34.247 21:29:55 -- common/autotest_common.sh@1518 -- # local bdfs 00:16:34.247 21:29:55 -- common/autotest_common.sh@1519 -- # bdfs=($(get_nvme_bdfs)) 00:16:34.247 21:29:55 -- common/autotest_common.sh@1519 -- # get_nvme_bdfs 00:16:34.247 21:29:55 -- common/autotest_common.sh@1498 -- # bdfs=() 00:16:34.247 21:29:55 -- common/autotest_common.sh@1498 -- # local bdfs 00:16:34.247 21:29:55 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:16:34.247 21:29:55 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:16:34.247 21:29:55 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:16:34.247 21:29:55 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:16:34.247 21:29:55 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:16:34.247 21:29:55 -- common/autotest_common.sh@1521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:16:34.505 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:34.505 Waiting for block devices as requested 00:16:34.761 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:16:34.761 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:16:34.761 21:29:55 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:16:34.761 21:29:55 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:00:06.0 00:16:34.761 21:29:55 -- common/autotest_common.sh@1487 -- # grep 0000:00:06.0/nvme/nvme 00:16:34.761 21:29:55 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:16:34.761 21:29:55 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:16:34.761 21:29:55 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 ]] 00:16:34.761 21:29:55 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:16:34.761 21:29:55 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:16:34.761 21:29:55 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme0 00:16:34.762 21:29:55 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme0 ]] 00:16:34.762 21:29:55 -- 
common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme0 00:16:34.762 21:29:55 -- common/autotest_common.sh@1530 -- # grep oacs 00:16:34.762 21:29:55 -- common/autotest_common.sh@1530 -- # cut -d: -f2 00:16:34.762 21:29:55 -- common/autotest_common.sh@1530 -- # oacs=' 0x12a' 00:16:34.762 21:29:55 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:16:34.762 21:29:55 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:16:34.762 21:29:55 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme0 00:16:34.762 21:29:55 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:16:34.762 21:29:55 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:16:34.762 21:29:55 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:16:34.762 21:29:55 -- common/autotest_common.sh@1540 -- # [[ 0 -eq 0 ]] 00:16:34.762 21:29:55 -- common/autotest_common.sh@1542 -- # continue 00:16:34.762 21:29:55 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:16:34.762 21:29:55 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:00:07.0 00:16:34.762 21:29:55 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:16:34.762 21:29:55 -- common/autotest_common.sh@1487 -- # grep 0000:00:07.0/nvme/nvme 00:16:34.762 21:29:55 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 00:16:34.762 21:29:55 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 ]] 00:16:34.762 21:29:55 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 00:16:34.762 21:29:55 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:16:34.762 21:29:55 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme1 00:16:34.762 21:29:55 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme1 ]] 00:16:34.762 21:29:55 -- common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme1 00:16:34.762 21:29:55 -- common/autotest_common.sh@1530 -- # grep oacs 00:16:34.762 21:29:55 -- common/autotest_common.sh@1530 -- # cut -d: -f2 00:16:34.762 21:29:55 -- common/autotest_common.sh@1530 -- # oacs=' 0x12a' 00:16:34.762 21:29:55 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:16:34.762 21:29:55 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:16:34.762 21:29:55 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme1 00:16:34.762 21:29:55 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:16:34.762 21:29:55 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:16:34.762 21:29:55 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:16:34.762 21:29:55 -- common/autotest_common.sh@1540 -- # [[ 0 -eq 0 ]] 00:16:34.762 21:29:55 -- common/autotest_common.sh@1542 -- # continue 00:16:34.762 21:29:55 -- spdk/autotest.sh@146 -- # timing_exit pre_cleanup 00:16:34.762 21:29:55 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:34.762 21:29:55 -- common/autotest_common.sh@10 -- # set +x 00:16:35.020 21:29:55 -- spdk/autotest.sh@149 -- # timing_enter afterboot 00:16:35.020 21:29:55 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:35.020 21:29:55 -- common/autotest_common.sh@10 -- # set +x 00:16:35.020 21:29:55 -- spdk/autotest.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:35.586 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:35.586 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:16:35.586 0000:00:07.0 (1b36 0010): nvme -> 
uio_pci_generic 00:16:35.845 21:29:56 -- spdk/autotest.sh@151 -- # timing_exit afterboot 00:16:35.845 21:29:56 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:35.845 21:29:56 -- common/autotest_common.sh@10 -- # set +x 00:16:35.845 21:29:56 -- spdk/autotest.sh@155 -- # opal_revert_cleanup 00:16:35.845 21:29:56 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:16:35.845 21:29:56 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:16:35.845 21:29:56 -- common/autotest_common.sh@1562 -- # bdfs=() 00:16:35.845 21:29:56 -- common/autotest_common.sh@1562 -- # local bdfs 00:16:35.845 21:29:56 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:16:35.845 21:29:56 -- common/autotest_common.sh@1498 -- # bdfs=() 00:16:35.845 21:29:56 -- common/autotest_common.sh@1498 -- # local bdfs 00:16:35.845 21:29:56 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:16:35.845 21:29:56 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:16:35.845 21:29:56 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:16:35.845 21:29:56 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:16:35.845 21:29:56 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:16:35.845 21:29:56 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:16:35.845 21:29:56 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:00:06.0/device 00:16:35.845 21:29:56 -- common/autotest_common.sh@1565 -- # device=0x0010 00:16:35.845 21:29:56 -- common/autotest_common.sh@1566 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:16:35.845 21:29:56 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:16:35.845 21:29:56 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:00:07.0/device 00:16:35.845 21:29:56 -- common/autotest_common.sh@1565 -- # device=0x0010 00:16:35.845 21:29:56 -- common/autotest_common.sh@1566 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:16:35.845 21:29:56 -- common/autotest_common.sh@1571 -- # printf '%s\n' 00:16:35.845 21:29:56 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:16:35.845 21:29:56 -- common/autotest_common.sh@1578 -- # return 0 00:16:35.845 21:29:56 -- spdk/autotest.sh@161 -- # '[' 0 -eq 1 ']' 00:16:35.845 21:29:56 -- spdk/autotest.sh@165 -- # '[' 1 -eq 1 ']' 00:16:35.845 21:29:56 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:16:35.845 21:29:56 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:16:35.845 21:29:56 -- spdk/autotest.sh@173 -- # timing_enter lib 00:16:35.845 21:29:56 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:35.845 21:29:56 -- common/autotest_common.sh@10 -- # set +x 00:16:35.845 21:29:56 -- spdk/autotest.sh@175 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:16:35.845 21:29:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:16:35.845 21:29:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:35.845 21:29:56 -- common/autotest_common.sh@10 -- # set +x 00:16:35.845 ************************************ 00:16:35.845 START TEST env 00:16:35.845 ************************************ 00:16:35.845 21:29:56 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:16:35.845 * Looking for test storage... 
00:16:35.845 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:16:35.845 21:29:56 -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:16:35.845 21:29:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:16:35.845 21:29:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:35.845 21:29:56 -- common/autotest_common.sh@10 -- # set +x 00:16:35.845 ************************************ 00:16:35.845 START TEST env_memory 00:16:35.845 ************************************ 00:16:35.845 21:29:56 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:16:35.845 00:16:35.845 00:16:35.845 CUnit - A unit testing framework for C - Version 2.1-3 00:16:35.845 http://cunit.sourceforge.net/ 00:16:35.845 00:16:35.845 00:16:35.845 Suite: memory 00:16:36.104 Test: alloc and free memory map ...[2024-07-11 21:29:56.817910] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:16:36.104 passed 00:16:36.104 Test: mem map translation ...[2024-07-11 21:29:56.842695] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:16:36.104 [2024-07-11 21:29:56.842865] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:16:36.104 [2024-07-11 21:29:56.843021] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:16:36.104 [2024-07-11 21:29:56.843234] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:16:36.104 passed 00:16:36.104 Test: mem map registration ...[2024-07-11 21:29:56.893788] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:16:36.104 [2024-07-11 21:29:56.893947] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:16:36.104 passed 00:16:36.104 Test: mem map adjacent registrations ...passed 00:16:36.104 00:16:36.104 Run Summary: Type Total Ran Passed Failed Inactive 00:16:36.104 suites 1 1 n/a 0 0 00:16:36.104 tests 4 4 4 0 0 00:16:36.104 asserts 152 152 152 0 n/a 00:16:36.104 00:16:36.104 Elapsed time = 0.169 seconds 00:16:36.104 00:16:36.104 real 0m0.184s 00:16:36.104 user 0m0.173s 00:16:36.104 sys 0m0.008s 00:16:36.104 21:29:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:36.104 21:29:56 -- common/autotest_common.sh@10 -- # set +x 00:16:36.104 ************************************ 00:16:36.104 END TEST env_memory 00:16:36.104 ************************************ 00:16:36.104 21:29:57 -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:16:36.104 21:29:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:16:36.104 21:29:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:36.104 21:29:57 -- common/autotest_common.sh@10 -- # set +x 00:16:36.104 ************************************ 00:16:36.104 START TEST env_vtophys 00:16:36.104 ************************************ 00:16:36.104 21:29:57 -- common/autotest_common.sh@1104 -- # 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:16:36.104 EAL: lib.eal log level changed from notice to debug 00:16:36.104 EAL: Detected lcore 0 as core 0 on socket 0 00:16:36.104 EAL: Detected lcore 1 as core 0 on socket 0 00:16:36.104 EAL: Detected lcore 2 as core 0 on socket 0 00:16:36.104 EAL: Detected lcore 3 as core 0 on socket 0 00:16:36.104 EAL: Detected lcore 4 as core 0 on socket 0 00:16:36.104 EAL: Detected lcore 5 as core 0 on socket 0 00:16:36.104 EAL: Detected lcore 6 as core 0 on socket 0 00:16:36.104 EAL: Detected lcore 7 as core 0 on socket 0 00:16:36.104 EAL: Detected lcore 8 as core 0 on socket 0 00:16:36.104 EAL: Detected lcore 9 as core 0 on socket 0 00:16:36.104 EAL: Maximum logical cores by configuration: 128 00:16:36.104 EAL: Detected CPU lcores: 10 00:16:36.104 EAL: Detected NUMA nodes: 1 00:16:36.104 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:16:36.104 EAL: Detected shared linkage of DPDK 00:16:36.104 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 00:16:36.104 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 00:16:36.104 EAL: Registered [vdev] bus. 00:16:36.104 EAL: bus.vdev log level changed from disabled to notice 00:16:36.104 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 00:16:36.104 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 00:16:36.104 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:16:36.104 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:16:36.104 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:16:36.104 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:16:36.104 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:16:36.104 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:16:36.104 EAL: No shared files mode enabled, IPC will be disabled 00:16:36.104 EAL: No shared files mode enabled, IPC is disabled 00:16:36.104 EAL: Selected IOVA mode 'PA' 00:16:36.104 EAL: Probing VFIO support... 00:16:36.104 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:16:36.104 EAL: VFIO modules not loaded, skipping VFIO support... 00:16:36.104 EAL: Ask a virtual area of 0x2e000 bytes 00:16:36.104 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:16:36.104 EAL: Setting up physically contiguous memory... 
00:16:36.104 EAL: Setting maximum number of open files to 524288 00:16:36.104 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:16:36.104 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:16:36.104 EAL: Ask a virtual area of 0x61000 bytes 00:16:36.104 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:16:36.104 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:16:36.104 EAL: Ask a virtual area of 0x400000000 bytes 00:16:36.104 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:16:36.104 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:16:36.104 EAL: Ask a virtual area of 0x61000 bytes 00:16:36.104 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:16:36.104 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:16:36.104 EAL: Ask a virtual area of 0x400000000 bytes 00:16:36.104 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:16:36.104 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:16:36.104 EAL: Ask a virtual area of 0x61000 bytes 00:16:36.104 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:16:36.104 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:16:36.104 EAL: Ask a virtual area of 0x400000000 bytes 00:16:36.104 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:16:36.104 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:16:36.104 EAL: Ask a virtual area of 0x61000 bytes 00:16:36.104 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:16:36.104 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:16:36.104 EAL: Ask a virtual area of 0x400000000 bytes 00:16:36.104 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:16:36.104 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:16:36.104 EAL: Hugepages will be freed exactly as allocated. 00:16:36.104 EAL: No shared files mode enabled, IPC is disabled 00:16:36.104 EAL: No shared files mode enabled, IPC is disabled 00:16:36.362 EAL: TSC frequency is ~2200000 KHz 00:16:36.362 EAL: Main lcore 0 is ready (tid=7faf93832a00;cpuset=[0]) 00:16:36.362 EAL: Trying to obtain current memory policy. 00:16:36.362 EAL: Setting policy MPOL_PREFERRED for socket 0 00:16:36.362 EAL: Restoring previous memory policy: 0 00:16:36.362 EAL: request: mp_malloc_sync 00:16:36.362 EAL: No shared files mode enabled, IPC is disabled 00:16:36.362 EAL: Heap on socket 0 was expanded by 2MB 00:16:36.362 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:16:36.362 EAL: No shared files mode enabled, IPC is disabled 00:16:36.362 EAL: No PCI address specified using 'addr=' in: bus=pci 00:16:36.362 EAL: Mem event callback 'spdk:(nil)' registered 00:16:36.362 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:16:36.362 00:16:36.362 00:16:36.362 CUnit - A unit testing framework for C - Version 2.1-3 00:16:36.362 http://cunit.sourceforge.net/ 00:16:36.362 00:16:36.362 00:16:36.362 Suite: components_suite 00:16:36.362 Test: vtophys_malloc_test ...passed 00:16:36.362 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
00:16:36.362 EAL: Setting policy MPOL_PREFERRED for socket 0 00:16:36.362 EAL: Restoring previous memory policy: 4 00:16:36.362 EAL: Calling mem event callback 'spdk:(nil)' 00:16:36.362 EAL: request: mp_malloc_sync 00:16:36.362 EAL: No shared files mode enabled, IPC is disabled 00:16:36.362 EAL: Heap on socket 0 was expanded by 4MB 00:16:36.362 EAL: Calling mem event callback 'spdk:(nil)' 00:16:36.362 EAL: request: mp_malloc_sync 00:16:36.362 EAL: No shared files mode enabled, IPC is disabled 00:16:36.362 EAL: Heap on socket 0 was shrunk by 4MB 00:16:36.362 EAL: Trying to obtain current memory policy. 00:16:36.362 EAL: Setting policy MPOL_PREFERRED for socket 0 00:16:36.362 EAL: Restoring previous memory policy: 4 00:16:36.362 EAL: Calling mem event callback 'spdk:(nil)' 00:16:36.362 EAL: request: mp_malloc_sync 00:16:36.362 EAL: No shared files mode enabled, IPC is disabled 00:16:36.362 EAL: Heap on socket 0 was expanded by 6MB 00:16:36.362 EAL: Calling mem event callback 'spdk:(nil)' 00:16:36.362 EAL: request: mp_malloc_sync 00:16:36.362 EAL: No shared files mode enabled, IPC is disabled 00:16:36.362 EAL: Heap on socket 0 was shrunk by 6MB 00:16:36.362 EAL: Trying to obtain current memory policy. 00:16:36.362 EAL: Setting policy MPOL_PREFERRED for socket 0 00:16:36.362 EAL: Restoring previous memory policy: 4 00:16:36.362 EAL: Calling mem event callback 'spdk:(nil)' 00:16:36.362 EAL: request: mp_malloc_sync 00:16:36.362 EAL: No shared files mode enabled, IPC is disabled 00:16:36.362 EAL: Heap on socket 0 was expanded by 10MB 00:16:36.362 EAL: Calling mem event callback 'spdk:(nil)' 00:16:36.362 EAL: request: mp_malloc_sync 00:16:36.362 EAL: No shared files mode enabled, IPC is disabled 00:16:36.362 EAL: Heap on socket 0 was shrunk by 10MB 00:16:36.362 EAL: Trying to obtain current memory policy. 00:16:36.362 EAL: Setting policy MPOL_PREFERRED for socket 0 00:16:36.362 EAL: Restoring previous memory policy: 4 00:16:36.362 EAL: Calling mem event callback 'spdk:(nil)' 00:16:36.362 EAL: request: mp_malloc_sync 00:16:36.362 EAL: No shared files mode enabled, IPC is disabled 00:16:36.362 EAL: Heap on socket 0 was expanded by 18MB 00:16:36.362 EAL: Calling mem event callback 'spdk:(nil)' 00:16:36.362 EAL: request: mp_malloc_sync 00:16:36.362 EAL: No shared files mode enabled, IPC is disabled 00:16:36.362 EAL: Heap on socket 0 was shrunk by 18MB 00:16:36.362 EAL: Trying to obtain current memory policy. 00:16:36.362 EAL: Setting policy MPOL_PREFERRED for socket 0 00:16:36.362 EAL: Restoring previous memory policy: 4 00:16:36.362 EAL: Calling mem event callback 'spdk:(nil)' 00:16:36.362 EAL: request: mp_malloc_sync 00:16:36.362 EAL: No shared files mode enabled, IPC is disabled 00:16:36.362 EAL: Heap on socket 0 was expanded by 34MB 00:16:36.362 EAL: Calling mem event callback 'spdk:(nil)' 00:16:36.362 EAL: request: mp_malloc_sync 00:16:36.362 EAL: No shared files mode enabled, IPC is disabled 00:16:36.362 EAL: Heap on socket 0 was shrunk by 34MB 00:16:36.362 EAL: Trying to obtain current memory policy. 
00:16:36.362 EAL: Setting policy MPOL_PREFERRED for socket 0 00:16:36.362 EAL: Restoring previous memory policy: 4 00:16:36.362 EAL: Calling mem event callback 'spdk:(nil)' 00:16:36.362 EAL: request: mp_malloc_sync 00:16:36.362 EAL: No shared files mode enabled, IPC is disabled 00:16:36.362 EAL: Heap on socket 0 was expanded by 66MB 00:16:36.362 EAL: Calling mem event callback 'spdk:(nil)' 00:16:36.362 EAL: request: mp_malloc_sync 00:16:36.362 EAL: No shared files mode enabled, IPC is disabled 00:16:36.362 EAL: Heap on socket 0 was shrunk by 66MB 00:16:36.362 EAL: Trying to obtain current memory policy. 00:16:36.362 EAL: Setting policy MPOL_PREFERRED for socket 0 00:16:36.362 EAL: Restoring previous memory policy: 4 00:16:36.362 EAL: Calling mem event callback 'spdk:(nil)' 00:16:36.362 EAL: request: mp_malloc_sync 00:16:36.362 EAL: No shared files mode enabled, IPC is disabled 00:16:36.362 EAL: Heap on socket 0 was expanded by 130MB 00:16:36.362 EAL: Calling mem event callback 'spdk:(nil)' 00:16:36.362 EAL: request: mp_malloc_sync 00:16:36.362 EAL: No shared files mode enabled, IPC is disabled 00:16:36.362 EAL: Heap on socket 0 was shrunk by 130MB 00:16:36.362 EAL: Trying to obtain current memory policy. 00:16:36.362 EAL: Setting policy MPOL_PREFERRED for socket 0 00:16:36.620 EAL: Restoring previous memory policy: 4 00:16:36.620 EAL: Calling mem event callback 'spdk:(nil)' 00:16:36.620 EAL: request: mp_malloc_sync 00:16:36.620 EAL: No shared files mode enabled, IPC is disabled 00:16:36.620 EAL: Heap on socket 0 was expanded by 258MB 00:16:36.620 EAL: Calling mem event callback 'spdk:(nil)' 00:16:36.620 EAL: request: mp_malloc_sync 00:16:36.620 EAL: No shared files mode enabled, IPC is disabled 00:16:36.620 EAL: Heap on socket 0 was shrunk by 258MB 00:16:36.620 EAL: Trying to obtain current memory policy. 00:16:36.620 EAL: Setting policy MPOL_PREFERRED for socket 0 00:16:36.879 EAL: Restoring previous memory policy: 4 00:16:36.879 EAL: Calling mem event callback 'spdk:(nil)' 00:16:36.879 EAL: request: mp_malloc_sync 00:16:36.879 EAL: No shared files mode enabled, IPC is disabled 00:16:36.879 EAL: Heap on socket 0 was expanded by 514MB 00:16:36.879 EAL: Calling mem event callback 'spdk:(nil)' 00:16:36.879 EAL: request: mp_malloc_sync 00:16:36.879 EAL: No shared files mode enabled, IPC is disabled 00:16:36.879 EAL: Heap on socket 0 was shrunk by 514MB 00:16:36.879 EAL: Trying to obtain current memory policy. 
00:16:36.879 EAL: Setting policy MPOL_PREFERRED for socket 0 00:16:37.137 EAL: Restoring previous memory policy: 4 00:16:37.137 EAL: Calling mem event callback 'spdk:(nil)' 00:16:37.137 EAL: request: mp_malloc_sync 00:16:37.137 EAL: No shared files mode enabled, IPC is disabled 00:16:37.137 EAL: Heap on socket 0 was expanded by 1026MB 00:16:37.395 EAL: Calling mem event callback 'spdk:(nil)' 00:16:37.653 passed 00:16:37.653 00:16:37.653 Run Summary: Type Total Ran Passed Failed Inactive 00:16:37.653 suites 1 1 n/a 0 0 00:16:37.653 tests 2 2 2 0 0 00:16:37.653 asserts 5225 5225 5225 0 n/a 00:16:37.653 00:16:37.653 Elapsed time = 1.265 seconds 00:16:37.653 EAL: request: mp_malloc_sync 00:16:37.653 EAL: No shared files mode enabled, IPC is disabled 00:16:37.653 EAL: Heap on socket 0 was shrunk by 1026MB 00:16:37.653 EAL: Calling mem event callback 'spdk:(nil)' 00:16:37.653 EAL: request: mp_malloc_sync 00:16:37.653 EAL: No shared files mode enabled, IPC is disabled 00:16:37.653 EAL: Heap on socket 0 was shrunk by 2MB 00:16:37.653 EAL: No shared files mode enabled, IPC is disabled 00:16:37.653 EAL: No shared files mode enabled, IPC is disabled 00:16:37.653 EAL: No shared files mode enabled, IPC is disabled 00:16:37.653 00:16:37.653 real 0m1.456s 00:16:37.653 user 0m0.799s 00:16:37.653 sys 0m0.522s 00:16:37.653 21:29:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:37.653 ************************************ 00:16:37.653 END TEST env_vtophys 00:16:37.653 ************************************ 00:16:37.653 21:29:58 -- common/autotest_common.sh@10 -- # set +x 00:16:37.653 21:29:58 -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:16:37.653 21:29:58 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:16:37.653 21:29:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:37.653 21:29:58 -- common/autotest_common.sh@10 -- # set +x 00:16:37.653 ************************************ 00:16:37.653 START TEST env_pci 00:16:37.653 ************************************ 00:16:37.653 21:29:58 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:16:37.653 00:16:37.653 00:16:37.653 CUnit - A unit testing framework for C - Version 2.1-3 00:16:37.653 http://cunit.sourceforge.net/ 00:16:37.653 00:16:37.653 00:16:37.653 Suite: pci 00:16:37.653 Test: pci_hook ...[2024-07-11 21:29:58.531912] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 65834 has claimed it 00:16:37.653 passed 00:16:37.653 00:16:37.653 Run Summary: Type Total Ran Passed Failed Inactive 00:16:37.653 suites 1 1 n/a 0 0 00:16:37.653 tests 1 1 1 0 0 00:16:37.653 asserts 25 25 25 0 n/a 00:16:37.653 00:16:37.653 Elapsed time = 0.003 seconds 00:16:37.653 EAL: Cannot find device (10000:00:01.0) 00:16:37.653 EAL: Failed to attach device on primary process 00:16:37.653 ************************************ 00:16:37.653 END TEST env_pci 00:16:37.653 ************************************ 00:16:37.653 00:16:37.653 real 0m0.023s 00:16:37.653 user 0m0.013s 00:16:37.653 sys 0m0.009s 00:16:37.653 21:29:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:37.653 21:29:58 -- common/autotest_common.sh@10 -- # set +x 00:16:37.653 21:29:58 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:16:37.653 21:29:58 -- env/env.sh@15 -- # uname 00:16:37.653 21:29:58 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:16:37.653 21:29:58 -- env/env.sh@22 -- # 
argv+=--base-virtaddr=0x200000000000 00:16:37.653 21:29:58 -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:16:37.653 21:29:58 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:16:37.653 21:29:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:37.653 21:29:58 -- common/autotest_common.sh@10 -- # set +x 00:16:37.653 ************************************ 00:16:37.653 START TEST env_dpdk_post_init 00:16:37.653 ************************************ 00:16:37.653 21:29:58 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:16:37.912 EAL: Detected CPU lcores: 10 00:16:37.912 EAL: Detected NUMA nodes: 1 00:16:37.912 EAL: Detected shared linkage of DPDK 00:16:37.912 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:16:37.912 EAL: Selected IOVA mode 'PA' 00:16:37.912 TELEMETRY: No legacy callbacks, legacy socket not created 00:16:37.912 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:06.0 (socket -1) 00:16:37.912 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:07.0 (socket -1) 00:16:37.912 Starting DPDK initialization... 00:16:37.912 Starting SPDK post initialization... 00:16:37.912 SPDK NVMe probe 00:16:37.912 Attaching to 0000:00:06.0 00:16:37.912 Attaching to 0000:00:07.0 00:16:37.912 Attached to 0000:00:06.0 00:16:37.912 Attached to 0000:00:07.0 00:16:37.912 Cleaning up... 00:16:37.912 00:16:37.912 real 0m0.192s 00:16:37.912 user 0m0.051s 00:16:37.912 sys 0m0.037s 00:16:37.912 21:29:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:37.912 ************************************ 00:16:37.912 END TEST env_dpdk_post_init 00:16:37.912 21:29:58 -- common/autotest_common.sh@10 -- # set +x 00:16:37.912 ************************************ 00:16:37.912 21:29:58 -- env/env.sh@26 -- # uname 00:16:37.912 21:29:58 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:16:37.912 21:29:58 -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:16:37.912 21:29:58 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:16:37.912 21:29:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:37.912 21:29:58 -- common/autotest_common.sh@10 -- # set +x 00:16:37.912 ************************************ 00:16:37.912 START TEST env_mem_callbacks 00:16:37.912 ************************************ 00:16:37.912 21:29:58 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:16:38.170 EAL: Detected CPU lcores: 10 00:16:38.170 EAL: Detected NUMA nodes: 1 00:16:38.170 EAL: Detected shared linkage of DPDK 00:16:38.170 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:16:38.170 EAL: Selected IOVA mode 'PA' 00:16:38.170 TELEMETRY: No legacy callbacks, legacy socket not created 00:16:38.170 00:16:38.170 00:16:38.170 CUnit - A unit testing framework for C - Version 2.1-3 00:16:38.170 http://cunit.sourceforge.net/ 00:16:38.170 00:16:38.170 00:16:38.170 Suite: memory 00:16:38.170 Test: test ... 
00:16:38.170 register 0x200000200000 2097152 00:16:38.170 malloc 3145728 00:16:38.170 register 0x200000400000 4194304 00:16:38.170 buf 0x200000500000 len 3145728 PASSED 00:16:38.170 malloc 64 00:16:38.170 buf 0x2000004fff40 len 64 PASSED 00:16:38.170 malloc 4194304 00:16:38.170 register 0x200000800000 6291456 00:16:38.170 buf 0x200000a00000 len 4194304 PASSED 00:16:38.170 free 0x200000500000 3145728 00:16:38.170 free 0x2000004fff40 64 00:16:38.170 unregister 0x200000400000 4194304 PASSED 00:16:38.170 free 0x200000a00000 4194304 00:16:38.170 unregister 0x200000800000 6291456 PASSED 00:16:38.170 malloc 8388608 00:16:38.170 register 0x200000400000 10485760 00:16:38.170 buf 0x200000600000 len 8388608 PASSED 00:16:38.170 free 0x200000600000 8388608 00:16:38.170 unregister 0x200000400000 10485760 PASSED 00:16:38.170 passed 00:16:38.170 00:16:38.170 Run Summary: Type Total Ran Passed Failed Inactive 00:16:38.170 suites 1 1 n/a 0 0 00:16:38.170 tests 1 1 1 0 0 00:16:38.170 asserts 15 15 15 0 n/a 00:16:38.170 00:16:38.170 Elapsed time = 0.007 seconds 00:16:38.170 00:16:38.170 real 0m0.144s 00:16:38.170 user 0m0.017s 00:16:38.170 sys 0m0.026s 00:16:38.170 21:29:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:38.170 21:29:58 -- common/autotest_common.sh@10 -- # set +x 00:16:38.170 ************************************ 00:16:38.170 END TEST env_mem_callbacks 00:16:38.170 ************************************ 00:16:38.170 00:16:38.170 real 0m2.334s 00:16:38.170 user 0m1.159s 00:16:38.170 sys 0m0.821s 00:16:38.170 21:29:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:38.170 21:29:59 -- common/autotest_common.sh@10 -- # set +x 00:16:38.170 ************************************ 00:16:38.170 END TEST env 00:16:38.170 ************************************ 00:16:38.170 21:29:59 -- spdk/autotest.sh@176 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:16:38.170 21:29:59 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:16:38.170 21:29:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:38.170 21:29:59 -- common/autotest_common.sh@10 -- # set +x 00:16:38.170 ************************************ 00:16:38.170 START TEST rpc 00:16:38.170 ************************************ 00:16:38.170 21:29:59 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:16:38.428 * Looking for test storage... 00:16:38.428 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:16:38.428 21:29:59 -- rpc/rpc.sh@65 -- # spdk_pid=65948 00:16:38.428 21:29:59 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:16:38.428 21:29:59 -- rpc/rpc.sh@67 -- # waitforlisten 65948 00:16:38.428 21:29:59 -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:16:38.428 21:29:59 -- common/autotest_common.sh@819 -- # '[' -z 65948 ']' 00:16:38.428 21:29:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:38.428 21:29:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:38.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:38.428 21:29:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:38.428 21:29:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:38.428 21:29:59 -- common/autotest_common.sh@10 -- # set +x 00:16:38.428 [2024-07-11 21:29:59.221575] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:16:38.428 [2024-07-11 21:29:59.221691] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65948 ] 00:16:38.428 [2024-07-11 21:29:59.358989] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:38.722 [2024-07-11 21:29:59.456048] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:38.722 [2024-07-11 21:29:59.456219] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:16:38.722 [2024-07-11 21:29:59.456240] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 65948' to capture a snapshot of events at runtime. 00:16:38.722 [2024-07-11 21:29:59.456249] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid65948 for offline analysis/debug. 00:16:38.722 [2024-07-11 21:29:59.456286] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:39.287 21:30:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:39.287 21:30:00 -- common/autotest_common.sh@852 -- # return 0 00:16:39.287 21:30:00 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:16:39.287 21:30:00 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:16:39.287 21:30:00 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:16:39.287 21:30:00 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:16:39.287 21:30:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:16:39.287 21:30:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:39.287 21:30:00 -- common/autotest_common.sh@10 -- # set +x 00:16:39.287 ************************************ 00:16:39.287 START TEST rpc_integrity 00:16:39.287 ************************************ 00:16:39.287 21:30:00 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:16:39.287 21:30:00 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:39.287 21:30:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:39.287 21:30:00 -- common/autotest_common.sh@10 -- # set +x 00:16:39.287 21:30:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:39.287 21:30:00 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:16:39.287 21:30:00 -- rpc/rpc.sh@13 -- # jq length 00:16:39.546 21:30:00 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:16:39.546 21:30:00 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:16:39.546 21:30:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:39.546 21:30:00 -- common/autotest_common.sh@10 -- # set +x 00:16:39.546 21:30:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:39.546 21:30:00 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:16:39.546 21:30:00 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:16:39.546 21:30:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:39.546 21:30:00 -- 
common/autotest_common.sh@10 -- # set +x 00:16:39.546 21:30:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:39.546 21:30:00 -- rpc/rpc.sh@16 -- # bdevs='[ 00:16:39.546 { 00:16:39.546 "name": "Malloc0", 00:16:39.546 "aliases": [ 00:16:39.546 "196db29a-f0ca-4b29-bd27-c0817029fff3" 00:16:39.546 ], 00:16:39.546 "product_name": "Malloc disk", 00:16:39.546 "block_size": 512, 00:16:39.546 "num_blocks": 16384, 00:16:39.546 "uuid": "196db29a-f0ca-4b29-bd27-c0817029fff3", 00:16:39.546 "assigned_rate_limits": { 00:16:39.546 "rw_ios_per_sec": 0, 00:16:39.546 "rw_mbytes_per_sec": 0, 00:16:39.546 "r_mbytes_per_sec": 0, 00:16:39.546 "w_mbytes_per_sec": 0 00:16:39.546 }, 00:16:39.546 "claimed": false, 00:16:39.546 "zoned": false, 00:16:39.546 "supported_io_types": { 00:16:39.546 "read": true, 00:16:39.546 "write": true, 00:16:39.546 "unmap": true, 00:16:39.546 "write_zeroes": true, 00:16:39.546 "flush": true, 00:16:39.546 "reset": true, 00:16:39.546 "compare": false, 00:16:39.546 "compare_and_write": false, 00:16:39.546 "abort": true, 00:16:39.546 "nvme_admin": false, 00:16:39.546 "nvme_io": false 00:16:39.546 }, 00:16:39.546 "memory_domains": [ 00:16:39.546 { 00:16:39.546 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:39.546 "dma_device_type": 2 00:16:39.546 } 00:16:39.546 ], 00:16:39.546 "driver_specific": {} 00:16:39.546 } 00:16:39.546 ]' 00:16:39.546 21:30:00 -- rpc/rpc.sh@17 -- # jq length 00:16:39.546 21:30:00 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:16:39.546 21:30:00 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:16:39.546 21:30:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:39.546 21:30:00 -- common/autotest_common.sh@10 -- # set +x 00:16:39.546 [2024-07-11 21:30:00.351438] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:16:39.546 [2024-07-11 21:30:00.351502] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:39.546 [2024-07-11 21:30:00.351524] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x242e580 00:16:39.546 [2024-07-11 21:30:00.351535] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:39.546 [2024-07-11 21:30:00.353307] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:39.546 [2024-07-11 21:30:00.353338] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:16:39.546 Passthru0 00:16:39.546 21:30:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:39.546 21:30:00 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:16:39.546 21:30:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:39.546 21:30:00 -- common/autotest_common.sh@10 -- # set +x 00:16:39.546 21:30:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:39.546 21:30:00 -- rpc/rpc.sh@20 -- # bdevs='[ 00:16:39.547 { 00:16:39.547 "name": "Malloc0", 00:16:39.547 "aliases": [ 00:16:39.547 "196db29a-f0ca-4b29-bd27-c0817029fff3" 00:16:39.547 ], 00:16:39.547 "product_name": "Malloc disk", 00:16:39.547 "block_size": 512, 00:16:39.547 "num_blocks": 16384, 00:16:39.547 "uuid": "196db29a-f0ca-4b29-bd27-c0817029fff3", 00:16:39.547 "assigned_rate_limits": { 00:16:39.547 "rw_ios_per_sec": 0, 00:16:39.547 "rw_mbytes_per_sec": 0, 00:16:39.547 "r_mbytes_per_sec": 0, 00:16:39.547 "w_mbytes_per_sec": 0 00:16:39.547 }, 00:16:39.547 "claimed": true, 00:16:39.547 "claim_type": "exclusive_write", 00:16:39.547 "zoned": false, 00:16:39.547 "supported_io_types": { 00:16:39.547 "read": true, 
00:16:39.547 "write": true, 00:16:39.547 "unmap": true, 00:16:39.547 "write_zeroes": true, 00:16:39.547 "flush": true, 00:16:39.547 "reset": true, 00:16:39.547 "compare": false, 00:16:39.547 "compare_and_write": false, 00:16:39.547 "abort": true, 00:16:39.547 "nvme_admin": false, 00:16:39.547 "nvme_io": false 00:16:39.547 }, 00:16:39.547 "memory_domains": [ 00:16:39.547 { 00:16:39.547 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:39.547 "dma_device_type": 2 00:16:39.547 } 00:16:39.547 ], 00:16:39.547 "driver_specific": {} 00:16:39.547 }, 00:16:39.547 { 00:16:39.547 "name": "Passthru0", 00:16:39.547 "aliases": [ 00:16:39.547 "97659a5d-3590-554d-a489-3071b0ca048b" 00:16:39.547 ], 00:16:39.547 "product_name": "passthru", 00:16:39.547 "block_size": 512, 00:16:39.547 "num_blocks": 16384, 00:16:39.547 "uuid": "97659a5d-3590-554d-a489-3071b0ca048b", 00:16:39.547 "assigned_rate_limits": { 00:16:39.547 "rw_ios_per_sec": 0, 00:16:39.547 "rw_mbytes_per_sec": 0, 00:16:39.547 "r_mbytes_per_sec": 0, 00:16:39.547 "w_mbytes_per_sec": 0 00:16:39.547 }, 00:16:39.547 "claimed": false, 00:16:39.547 "zoned": false, 00:16:39.547 "supported_io_types": { 00:16:39.547 "read": true, 00:16:39.547 "write": true, 00:16:39.547 "unmap": true, 00:16:39.547 "write_zeroes": true, 00:16:39.547 "flush": true, 00:16:39.547 "reset": true, 00:16:39.547 "compare": false, 00:16:39.547 "compare_and_write": false, 00:16:39.547 "abort": true, 00:16:39.547 "nvme_admin": false, 00:16:39.547 "nvme_io": false 00:16:39.547 }, 00:16:39.547 "memory_domains": [ 00:16:39.547 { 00:16:39.547 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:39.547 "dma_device_type": 2 00:16:39.547 } 00:16:39.547 ], 00:16:39.547 "driver_specific": { 00:16:39.547 "passthru": { 00:16:39.547 "name": "Passthru0", 00:16:39.547 "base_bdev_name": "Malloc0" 00:16:39.547 } 00:16:39.547 } 00:16:39.547 } 00:16:39.547 ]' 00:16:39.547 21:30:00 -- rpc/rpc.sh@21 -- # jq length 00:16:39.547 21:30:00 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:16:39.547 21:30:00 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:16:39.547 21:30:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:39.547 21:30:00 -- common/autotest_common.sh@10 -- # set +x 00:16:39.547 21:30:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:39.547 21:30:00 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:16:39.547 21:30:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:39.547 21:30:00 -- common/autotest_common.sh@10 -- # set +x 00:16:39.547 21:30:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:39.547 21:30:00 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:16:39.547 21:30:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:39.547 21:30:00 -- common/autotest_common.sh@10 -- # set +x 00:16:39.547 21:30:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:39.547 21:30:00 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:16:39.547 21:30:00 -- rpc/rpc.sh@26 -- # jq length 00:16:39.805 21:30:00 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:16:39.805 00:16:39.805 real 0m0.310s 00:16:39.805 user 0m0.206s 00:16:39.805 sys 0m0.034s 00:16:39.805 21:30:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:39.805 21:30:00 -- common/autotest_common.sh@10 -- # set +x 00:16:39.805 ************************************ 00:16:39.805 END TEST rpc_integrity 00:16:39.805 ************************************ 00:16:39.805 21:30:00 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:16:39.805 21:30:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 
00:16:39.805 21:30:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:39.805 21:30:00 -- common/autotest_common.sh@10 -- # set +x 00:16:39.805 ************************************ 00:16:39.805 START TEST rpc_plugins 00:16:39.805 ************************************ 00:16:39.805 21:30:00 -- common/autotest_common.sh@1104 -- # rpc_plugins 00:16:39.805 21:30:00 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:16:39.805 21:30:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:39.805 21:30:00 -- common/autotest_common.sh@10 -- # set +x 00:16:39.805 21:30:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:39.805 21:30:00 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:16:39.805 21:30:00 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:16:39.805 21:30:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:39.805 21:30:00 -- common/autotest_common.sh@10 -- # set +x 00:16:39.805 21:30:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:39.805 21:30:00 -- rpc/rpc.sh@31 -- # bdevs='[ 00:16:39.805 { 00:16:39.805 "name": "Malloc1", 00:16:39.805 "aliases": [ 00:16:39.805 "e5d191b1-dd1f-4680-b7c9-0382de578c88" 00:16:39.805 ], 00:16:39.805 "product_name": "Malloc disk", 00:16:39.805 "block_size": 4096, 00:16:39.805 "num_blocks": 256, 00:16:39.805 "uuid": "e5d191b1-dd1f-4680-b7c9-0382de578c88", 00:16:39.805 "assigned_rate_limits": { 00:16:39.805 "rw_ios_per_sec": 0, 00:16:39.805 "rw_mbytes_per_sec": 0, 00:16:39.805 "r_mbytes_per_sec": 0, 00:16:39.805 "w_mbytes_per_sec": 0 00:16:39.805 }, 00:16:39.805 "claimed": false, 00:16:39.805 "zoned": false, 00:16:39.805 "supported_io_types": { 00:16:39.805 "read": true, 00:16:39.805 "write": true, 00:16:39.805 "unmap": true, 00:16:39.805 "write_zeroes": true, 00:16:39.805 "flush": true, 00:16:39.805 "reset": true, 00:16:39.805 "compare": false, 00:16:39.805 "compare_and_write": false, 00:16:39.805 "abort": true, 00:16:39.805 "nvme_admin": false, 00:16:39.805 "nvme_io": false 00:16:39.805 }, 00:16:39.805 "memory_domains": [ 00:16:39.805 { 00:16:39.805 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:39.805 "dma_device_type": 2 00:16:39.805 } 00:16:39.805 ], 00:16:39.805 "driver_specific": {} 00:16:39.805 } 00:16:39.805 ]' 00:16:39.805 21:30:00 -- rpc/rpc.sh@32 -- # jq length 00:16:39.805 21:30:00 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:16:39.805 21:30:00 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:16:39.805 21:30:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:39.805 21:30:00 -- common/autotest_common.sh@10 -- # set +x 00:16:39.805 21:30:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:39.805 21:30:00 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:16:39.805 21:30:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:39.805 21:30:00 -- common/autotest_common.sh@10 -- # set +x 00:16:39.805 21:30:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:39.805 21:30:00 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:16:39.805 21:30:00 -- rpc/rpc.sh@36 -- # jq length 00:16:39.805 21:30:00 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:16:39.805 00:16:39.805 real 0m0.163s 00:16:39.805 user 0m0.103s 00:16:39.805 sys 0m0.022s 00:16:39.805 21:30:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:39.805 21:30:00 -- common/autotest_common.sh@10 -- # set +x 00:16:39.805 ************************************ 00:16:39.805 END TEST rpc_plugins 00:16:39.805 ************************************ 00:16:40.064 21:30:00 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test 
rpc_trace_cmd_test 00:16:40.064 21:30:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:16:40.064 21:30:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:40.064 21:30:00 -- common/autotest_common.sh@10 -- # set +x 00:16:40.064 ************************************ 00:16:40.064 START TEST rpc_trace_cmd_test 00:16:40.064 ************************************ 00:16:40.064 21:30:00 -- common/autotest_common.sh@1104 -- # rpc_trace_cmd_test 00:16:40.064 21:30:00 -- rpc/rpc.sh@40 -- # local info 00:16:40.064 21:30:00 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:16:40.064 21:30:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:40.064 21:30:00 -- common/autotest_common.sh@10 -- # set +x 00:16:40.064 21:30:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:40.064 21:30:00 -- rpc/rpc.sh@42 -- # info='{ 00:16:40.064 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid65948", 00:16:40.064 "tpoint_group_mask": "0x8", 00:16:40.064 "iscsi_conn": { 00:16:40.064 "mask": "0x2", 00:16:40.064 "tpoint_mask": "0x0" 00:16:40.064 }, 00:16:40.064 "scsi": { 00:16:40.064 "mask": "0x4", 00:16:40.064 "tpoint_mask": "0x0" 00:16:40.064 }, 00:16:40.064 "bdev": { 00:16:40.064 "mask": "0x8", 00:16:40.064 "tpoint_mask": "0xffffffffffffffff" 00:16:40.064 }, 00:16:40.064 "nvmf_rdma": { 00:16:40.064 "mask": "0x10", 00:16:40.064 "tpoint_mask": "0x0" 00:16:40.064 }, 00:16:40.064 "nvmf_tcp": { 00:16:40.064 "mask": "0x20", 00:16:40.064 "tpoint_mask": "0x0" 00:16:40.064 }, 00:16:40.064 "ftl": { 00:16:40.064 "mask": "0x40", 00:16:40.064 "tpoint_mask": "0x0" 00:16:40.064 }, 00:16:40.064 "blobfs": { 00:16:40.064 "mask": "0x80", 00:16:40.064 "tpoint_mask": "0x0" 00:16:40.064 }, 00:16:40.064 "dsa": { 00:16:40.064 "mask": "0x200", 00:16:40.064 "tpoint_mask": "0x0" 00:16:40.064 }, 00:16:40.064 "thread": { 00:16:40.064 "mask": "0x400", 00:16:40.064 "tpoint_mask": "0x0" 00:16:40.064 }, 00:16:40.064 "nvme_pcie": { 00:16:40.064 "mask": "0x800", 00:16:40.064 "tpoint_mask": "0x0" 00:16:40.064 }, 00:16:40.064 "iaa": { 00:16:40.064 "mask": "0x1000", 00:16:40.064 "tpoint_mask": "0x0" 00:16:40.064 }, 00:16:40.064 "nvme_tcp": { 00:16:40.064 "mask": "0x2000", 00:16:40.064 "tpoint_mask": "0x0" 00:16:40.064 }, 00:16:40.064 "bdev_nvme": { 00:16:40.064 "mask": "0x4000", 00:16:40.064 "tpoint_mask": "0x0" 00:16:40.064 } 00:16:40.064 }' 00:16:40.064 21:30:00 -- rpc/rpc.sh@43 -- # jq length 00:16:40.064 21:30:00 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:16:40.064 21:30:00 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:16:40.064 21:30:00 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:16:40.064 21:30:00 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:16:40.064 21:30:00 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:16:40.064 21:30:00 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:16:40.064 21:30:00 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:16:40.064 21:30:00 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:16:40.322 21:30:01 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:16:40.322 00:16:40.322 real 0m0.266s 00:16:40.322 user 0m0.231s 00:16:40.322 sys 0m0.025s 00:16:40.322 21:30:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:40.322 21:30:01 -- common/autotest_common.sh@10 -- # set +x 00:16:40.322 ************************************ 00:16:40.322 END TEST rpc_trace_cmd_test 00:16:40.322 ************************************ 00:16:40.322 21:30:01 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:16:40.322 21:30:01 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:16:40.322 21:30:01 -- rpc/rpc.sh@81 -- # run_test 
rpc_daemon_integrity rpc_integrity 00:16:40.322 21:30:01 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:16:40.322 21:30:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:40.322 21:30:01 -- common/autotest_common.sh@10 -- # set +x 00:16:40.322 ************************************ 00:16:40.322 START TEST rpc_daemon_integrity 00:16:40.322 ************************************ 00:16:40.322 21:30:01 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:16:40.322 21:30:01 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:40.322 21:30:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:40.322 21:30:01 -- common/autotest_common.sh@10 -- # set +x 00:16:40.322 21:30:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:40.322 21:30:01 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:16:40.322 21:30:01 -- rpc/rpc.sh@13 -- # jq length 00:16:40.322 21:30:01 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:16:40.322 21:30:01 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:16:40.322 21:30:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:40.322 21:30:01 -- common/autotest_common.sh@10 -- # set +x 00:16:40.322 21:30:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:40.322 21:30:01 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:16:40.322 21:30:01 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:16:40.322 21:30:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:40.322 21:30:01 -- common/autotest_common.sh@10 -- # set +x 00:16:40.322 21:30:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:40.322 21:30:01 -- rpc/rpc.sh@16 -- # bdevs='[ 00:16:40.322 { 00:16:40.322 "name": "Malloc2", 00:16:40.322 "aliases": [ 00:16:40.322 "6db35b95-a707-43a2-869c-b932ff2e497e" 00:16:40.322 ], 00:16:40.322 "product_name": "Malloc disk", 00:16:40.322 "block_size": 512, 00:16:40.322 "num_blocks": 16384, 00:16:40.322 "uuid": "6db35b95-a707-43a2-869c-b932ff2e497e", 00:16:40.322 "assigned_rate_limits": { 00:16:40.322 "rw_ios_per_sec": 0, 00:16:40.322 "rw_mbytes_per_sec": 0, 00:16:40.322 "r_mbytes_per_sec": 0, 00:16:40.322 "w_mbytes_per_sec": 0 00:16:40.322 }, 00:16:40.322 "claimed": false, 00:16:40.322 "zoned": false, 00:16:40.322 "supported_io_types": { 00:16:40.322 "read": true, 00:16:40.322 "write": true, 00:16:40.322 "unmap": true, 00:16:40.322 "write_zeroes": true, 00:16:40.322 "flush": true, 00:16:40.322 "reset": true, 00:16:40.322 "compare": false, 00:16:40.322 "compare_and_write": false, 00:16:40.322 "abort": true, 00:16:40.322 "nvme_admin": false, 00:16:40.322 "nvme_io": false 00:16:40.322 }, 00:16:40.323 "memory_domains": [ 00:16:40.323 { 00:16:40.323 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:40.323 "dma_device_type": 2 00:16:40.323 } 00:16:40.323 ], 00:16:40.323 "driver_specific": {} 00:16:40.323 } 00:16:40.323 ]' 00:16:40.323 21:30:01 -- rpc/rpc.sh@17 -- # jq length 00:16:40.323 21:30:01 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:16:40.323 21:30:01 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:16:40.323 21:30:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:40.323 21:30:01 -- common/autotest_common.sh@10 -- # set +x 00:16:40.323 [2024-07-11 21:30:01.248167] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:16:40.323 [2024-07-11 21:30:01.248230] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:40.323 [2024-07-11 21:30:01.248254] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x242fd20 00:16:40.323 [2024-07-11 
21:30:01.248264] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:40.323 [2024-07-11 21:30:01.249815] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:40.323 [2024-07-11 21:30:01.249847] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:16:40.323 Passthru0 00:16:40.323 21:30:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:40.323 21:30:01 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:16:40.323 21:30:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:40.323 21:30:01 -- common/autotest_common.sh@10 -- # set +x 00:16:40.581 21:30:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:40.581 21:30:01 -- rpc/rpc.sh@20 -- # bdevs='[ 00:16:40.581 { 00:16:40.581 "name": "Malloc2", 00:16:40.581 "aliases": [ 00:16:40.581 "6db35b95-a707-43a2-869c-b932ff2e497e" 00:16:40.581 ], 00:16:40.581 "product_name": "Malloc disk", 00:16:40.581 "block_size": 512, 00:16:40.581 "num_blocks": 16384, 00:16:40.581 "uuid": "6db35b95-a707-43a2-869c-b932ff2e497e", 00:16:40.581 "assigned_rate_limits": { 00:16:40.581 "rw_ios_per_sec": 0, 00:16:40.581 "rw_mbytes_per_sec": 0, 00:16:40.581 "r_mbytes_per_sec": 0, 00:16:40.581 "w_mbytes_per_sec": 0 00:16:40.581 }, 00:16:40.582 "claimed": true, 00:16:40.582 "claim_type": "exclusive_write", 00:16:40.582 "zoned": false, 00:16:40.582 "supported_io_types": { 00:16:40.582 "read": true, 00:16:40.582 "write": true, 00:16:40.582 "unmap": true, 00:16:40.582 "write_zeroes": true, 00:16:40.582 "flush": true, 00:16:40.582 "reset": true, 00:16:40.582 "compare": false, 00:16:40.582 "compare_and_write": false, 00:16:40.582 "abort": true, 00:16:40.582 "nvme_admin": false, 00:16:40.582 "nvme_io": false 00:16:40.582 }, 00:16:40.582 "memory_domains": [ 00:16:40.582 { 00:16:40.582 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:40.582 "dma_device_type": 2 00:16:40.582 } 00:16:40.582 ], 00:16:40.582 "driver_specific": {} 00:16:40.582 }, 00:16:40.582 { 00:16:40.582 "name": "Passthru0", 00:16:40.582 "aliases": [ 00:16:40.582 "9a8e4a49-a067-5bf0-b7c0-10e4fa847981" 00:16:40.582 ], 00:16:40.582 "product_name": "passthru", 00:16:40.582 "block_size": 512, 00:16:40.582 "num_blocks": 16384, 00:16:40.582 "uuid": "9a8e4a49-a067-5bf0-b7c0-10e4fa847981", 00:16:40.582 "assigned_rate_limits": { 00:16:40.582 "rw_ios_per_sec": 0, 00:16:40.582 "rw_mbytes_per_sec": 0, 00:16:40.582 "r_mbytes_per_sec": 0, 00:16:40.582 "w_mbytes_per_sec": 0 00:16:40.582 }, 00:16:40.582 "claimed": false, 00:16:40.582 "zoned": false, 00:16:40.582 "supported_io_types": { 00:16:40.582 "read": true, 00:16:40.582 "write": true, 00:16:40.582 "unmap": true, 00:16:40.582 "write_zeroes": true, 00:16:40.582 "flush": true, 00:16:40.582 "reset": true, 00:16:40.582 "compare": false, 00:16:40.582 "compare_and_write": false, 00:16:40.582 "abort": true, 00:16:40.582 "nvme_admin": false, 00:16:40.582 "nvme_io": false 00:16:40.582 }, 00:16:40.582 "memory_domains": [ 00:16:40.582 { 00:16:40.582 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:40.582 "dma_device_type": 2 00:16:40.582 } 00:16:40.582 ], 00:16:40.582 "driver_specific": { 00:16:40.582 "passthru": { 00:16:40.582 "name": "Passthru0", 00:16:40.582 "base_bdev_name": "Malloc2" 00:16:40.582 } 00:16:40.582 } 00:16:40.582 } 00:16:40.582 ]' 00:16:40.582 21:30:01 -- rpc/rpc.sh@21 -- # jq length 00:16:40.582 21:30:01 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:16:40.582 21:30:01 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:16:40.582 21:30:01 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:16:40.582 21:30:01 -- common/autotest_common.sh@10 -- # set +x 00:16:40.582 21:30:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:40.582 21:30:01 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:16:40.582 21:30:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:40.582 21:30:01 -- common/autotest_common.sh@10 -- # set +x 00:16:40.582 21:30:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:40.582 21:30:01 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:16:40.582 21:30:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:40.582 21:30:01 -- common/autotest_common.sh@10 -- # set +x 00:16:40.582 21:30:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:40.582 21:30:01 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:16:40.582 21:30:01 -- rpc/rpc.sh@26 -- # jq length 00:16:40.582 21:30:01 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:16:40.582 00:16:40.582 real 0m0.320s 00:16:40.582 user 0m0.211s 00:16:40.582 sys 0m0.042s 00:16:40.582 21:30:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:40.582 21:30:01 -- common/autotest_common.sh@10 -- # set +x 00:16:40.582 ************************************ 00:16:40.582 END TEST rpc_daemon_integrity 00:16:40.582 ************************************ 00:16:40.582 21:30:01 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:16:40.582 21:30:01 -- rpc/rpc.sh@84 -- # killprocess 65948 00:16:40.582 21:30:01 -- common/autotest_common.sh@926 -- # '[' -z 65948 ']' 00:16:40.582 21:30:01 -- common/autotest_common.sh@930 -- # kill -0 65948 00:16:40.582 21:30:01 -- common/autotest_common.sh@931 -- # uname 00:16:40.582 21:30:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:40.582 21:30:01 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 65948 00:16:40.582 21:30:01 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:40.582 21:30:01 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:40.582 killing process with pid 65948 00:16:40.582 21:30:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 65948' 00:16:40.582 21:30:01 -- common/autotest_common.sh@945 -- # kill 65948 00:16:40.582 21:30:01 -- common/autotest_common.sh@950 -- # wait 65948 00:16:41.147 00:16:41.147 real 0m2.789s 00:16:41.147 user 0m3.593s 00:16:41.147 sys 0m0.683s 00:16:41.147 21:30:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:41.147 21:30:01 -- common/autotest_common.sh@10 -- # set +x 00:16:41.147 ************************************ 00:16:41.147 END TEST rpc 00:16:41.147 ************************************ 00:16:41.147 21:30:01 -- spdk/autotest.sh@177 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:16:41.147 21:30:01 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:16:41.147 21:30:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:41.147 21:30:01 -- common/autotest_common.sh@10 -- # set +x 00:16:41.147 ************************************ 00:16:41.147 START TEST rpc_client 00:16:41.147 ************************************ 00:16:41.147 21:30:01 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:16:41.147 * Looking for test storage... 
00:16:41.147 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:16:41.147 21:30:01 -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:16:41.147 OK 00:16:41.147 21:30:02 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:16:41.147 00:16:41.147 real 0m0.106s 00:16:41.147 user 0m0.045s 00:16:41.147 sys 0m0.067s 00:16:41.147 21:30:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:41.147 21:30:02 -- common/autotest_common.sh@10 -- # set +x 00:16:41.147 ************************************ 00:16:41.147 END TEST rpc_client 00:16:41.147 ************************************ 00:16:41.147 21:30:02 -- spdk/autotest.sh@178 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:16:41.147 21:30:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:16:41.147 21:30:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:41.147 21:30:02 -- common/autotest_common.sh@10 -- # set +x 00:16:41.147 ************************************ 00:16:41.147 START TEST json_config 00:16:41.147 ************************************ 00:16:41.147 21:30:02 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:16:41.405 21:30:02 -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:41.405 21:30:02 -- nvmf/common.sh@7 -- # uname -s 00:16:41.405 21:30:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:41.405 21:30:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:41.405 21:30:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:41.405 21:30:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:41.405 21:30:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:41.405 21:30:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:41.405 21:30:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:41.405 21:30:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:41.405 21:30:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:41.405 21:30:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:41.405 21:30:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:65f0dc09-2f81-4c7b-a413-2a2a000e2750 00:16:41.405 21:30:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=65f0dc09-2f81-4c7b-a413-2a2a000e2750 00:16:41.405 21:30:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:41.405 21:30:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:41.405 21:30:02 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:16:41.405 21:30:02 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:41.405 21:30:02 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:41.405 21:30:02 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:41.405 21:30:02 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:41.405 21:30:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:41.405 21:30:02 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:41.405 21:30:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:41.405 21:30:02 -- paths/export.sh@5 -- # export PATH 00:16:41.405 21:30:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:41.405 21:30:02 -- nvmf/common.sh@46 -- # : 0 00:16:41.405 21:30:02 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:41.405 21:30:02 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:41.405 21:30:02 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:41.405 21:30:02 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:41.405 21:30:02 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:41.405 21:30:02 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:41.405 21:30:02 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:41.405 21:30:02 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:41.405 21:30:02 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:16:41.405 21:30:02 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:16:41.405 21:30:02 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:16:41.405 21:30:02 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:16:41.405 21:30:02 -- json_config/json_config.sh@30 -- # app_pid=(['target']='' ['initiator']='') 00:16:41.405 21:30:02 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:16:41.405 21:30:02 -- json_config/json_config.sh@31 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:16:41.405 21:30:02 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:16:41.405 21:30:02 -- json_config/json_config.sh@32 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:16:41.405 21:30:02 -- json_config/json_config.sh@32 -- # declare -A app_params 00:16:41.405 21:30:02 -- json_config/json_config.sh@33 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:16:41.406 21:30:02 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:16:41.406 21:30:02 -- json_config/json_config.sh@43 -- # last_event_id=0 00:16:41.406 21:30:02 -- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:16:41.406 INFO: JSON configuration test init 
00:16:41.406 21:30:02 -- json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init' 00:16:41.406 21:30:02 -- json_config/json_config.sh@420 -- # json_config_test_init 00:16:41.406 21:30:02 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:16:41.406 21:30:02 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:41.406 21:30:02 -- common/autotest_common.sh@10 -- # set +x 00:16:41.406 21:30:02 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:16:41.406 21:30:02 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:41.406 21:30:02 -- common/autotest_common.sh@10 -- # set +x 00:16:41.406 21:30:02 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:16:41.406 21:30:02 -- json_config/json_config.sh@98 -- # local app=target 00:16:41.406 21:30:02 -- json_config/json_config.sh@99 -- # shift 00:16:41.406 21:30:02 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:16:41.406 21:30:02 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:16:41.406 21:30:02 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:16:41.406 21:30:02 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:16:41.406 21:30:02 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:16:41.406 21:30:02 -- json_config/json_config.sh@111 -- # app_pid[$app]=66185 00:16:41.406 Waiting for target to run... 00:16:41.406 21:30:02 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:16:41.406 21:30:02 -- json_config/json_config.sh@114 -- # waitforlisten 66185 /var/tmp/spdk_tgt.sock 00:16:41.406 21:30:02 -- common/autotest_common.sh@819 -- # '[' -z 66185 ']' 00:16:41.406 21:30:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:16:41.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:16:41.406 21:30:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:41.406 21:30:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:16:41.406 21:30:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:41.406 21:30:02 -- common/autotest_common.sh@10 -- # set +x 00:16:41.406 21:30:02 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:16:41.406 [2024-07-11 21:30:02.202504] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
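The target for this first pass (pid 66185) is started with --wait-for-rpc, so no subsystem is initialized until RPCs arrive on the private socket /var/tmp/spdk_tgt.sock; waitforlisten then blocks until that socket answers. A minimal, equivalent readiness poll (a sketch, not the harness's actual waitforlisten implementation) would be:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
    tgt_pid=$!
    # poll the RPC socket until it accepts requests; rpc_get_methods answers even before framework init
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done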
00:16:41.406 [2024-07-11 21:30:02.202627] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66185 ] 00:16:41.971 [2024-07-11 21:30:02.621146] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:41.971 [2024-07-11 21:30:02.691520] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:41.971 [2024-07-11 21:30:02.691727] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:42.536 21:30:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:42.536 21:30:03 -- common/autotest_common.sh@852 -- # return 0 00:16:42.536 00:16:42.536 21:30:03 -- json_config/json_config.sh@115 -- # echo '' 00:16:42.536 21:30:03 -- json_config/json_config.sh@322 -- # create_accel_config 00:16:42.536 21:30:03 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:16:42.536 21:30:03 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:42.536 21:30:03 -- common/autotest_common.sh@10 -- # set +x 00:16:42.536 21:30:03 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:16:42.536 21:30:03 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:16:42.536 21:30:03 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:42.536 21:30:03 -- common/autotest_common.sh@10 -- # set +x 00:16:42.536 21:30:03 -- json_config/json_config.sh@326 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:16:42.536 21:30:03 -- json_config/json_config.sh@327 -- # tgt_rpc load_config 00:16:42.536 21:30:03 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:16:42.804 21:30:03 -- json_config/json_config.sh@329 -- # tgt_check_notification_types 00:16:42.804 21:30:03 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:16:42.804 21:30:03 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:42.804 21:30:03 -- common/autotest_common.sh@10 -- # set +x 00:16:42.804 21:30:03 -- json_config/json_config.sh@48 -- # local ret=0 00:16:42.804 21:30:03 -- json_config/json_config.sh@49 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:16:42.804 21:30:03 -- json_config/json_config.sh@49 -- # local enabled_types 00:16:42.804 21:30:03 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:16:42.804 21:30:03 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:16:42.804 21:30:03 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:16:43.095 21:30:03 -- json_config/json_config.sh@51 -- # get_types=('bdev_register' 'bdev_unregister') 00:16:43.095 21:30:03 -- json_config/json_config.sh@51 -- # local get_types 00:16:43.095 21:30:03 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:16:43.095 21:30:03 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:16:43.095 21:30:03 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:43.095 21:30:03 -- common/autotest_common.sh@10 -- # set +x 00:16:43.095 21:30:04 -- json_config/json_config.sh@58 -- # return 0 00:16:43.095 21:30:04 -- json_config/json_config.sh@331 -- # [[ 0 -eq 1 ]] 00:16:43.095 21:30:04 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 
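The tgt_check_notification_types step above asks the freshly started target which notification types it emits and compares the answer against the expected pair. Stripped of the harness plumbing, the same check is roughly (sketch):

    # expected: exactly bdev_register and bdev_unregister
    mapfile -t types < <(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types | jq -r '.[]')
    [[ "${types[*]}" == "bdev_register bdev_unregister" ]] && echo 'notification types OK'

The entries that follow (create_nvmf_subsystem_config) then assemble the NVMe-oF target piece by piece; condensed, the RPCs they issue against the same socket are:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py; sock=/var/tmp/spdk_tgt.sock
    $rpc -s $sock bdev_malloc_create 8 512 --name MallocForNvmf0
    $rpc -s $sock bdev_malloc_create 4 1024 --name MallocForNvmf1
    $rpc -s $sock nvmf_create_transport -t tcp -u 8192 -c 0
    $rpc -s $sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc -s $sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $rpc -s $sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $rpc -s $sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420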
00:16:43.095 21:30:04 -- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]] 00:16:43.095 21:30:04 -- json_config/json_config.sh@343 -- # [[ 1 -eq 1 ]] 00:16:43.095 21:30:04 -- json_config/json_config.sh@344 -- # create_nvmf_subsystem_config 00:16:43.095 21:30:04 -- json_config/json_config.sh@283 -- # timing_enter create_nvmf_subsystem_config 00:16:43.095 21:30:04 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:43.095 21:30:04 -- common/autotest_common.sh@10 -- # set +x 00:16:43.095 21:30:04 -- json_config/json_config.sh@285 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:16:43.095 21:30:04 -- json_config/json_config.sh@286 -- # [[ tcp == \r\d\m\a ]] 00:16:43.095 21:30:04 -- json_config/json_config.sh@290 -- # [[ -z 127.0.0.1 ]] 00:16:43.095 21:30:04 -- json_config/json_config.sh@295 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:16:43.095 21:30:04 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:16:43.353 MallocForNvmf0 00:16:43.610 21:30:04 -- json_config/json_config.sh@296 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:16:43.610 21:30:04 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:16:43.869 MallocForNvmf1 00:16:43.869 21:30:04 -- json_config/json_config.sh@298 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:16:43.869 21:30:04 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:16:44.127 [2024-07-11 21:30:04.869910] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:44.127 21:30:04 -- json_config/json_config.sh@299 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:44.127 21:30:04 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:44.385 21:30:05 -- json_config/json_config.sh@300 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:16:44.385 21:30:05 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:16:44.386 21:30:05 -- json_config/json_config.sh@301 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:16:44.386 21:30:05 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:16:44.955 21:30:05 -- json_config/json_config.sh@302 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:16:44.955 21:30:05 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:16:44.955 [2024-07-11 21:30:05.903163] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:16:45.214 21:30:05 -- json_config/json_config.sh@304 -- # timing_exit create_nvmf_subsystem_config 00:16:45.214 21:30:05 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:45.214 21:30:05 -- common/autotest_common.sh@10 -- # set +x 00:16:45.214 21:30:05 -- 
json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:16:45.214 21:30:05 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:45.214 21:30:05 -- common/autotest_common.sh@10 -- # set +x 00:16:45.214 21:30:06 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:16:45.214 21:30:06 -- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:16:45.214 21:30:06 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:16:45.473 MallocBdevForConfigChangeCheck 00:16:45.473 21:30:06 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:16:45.473 21:30:06 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:45.473 21:30:06 -- common/autotest_common.sh@10 -- # set +x 00:16:45.473 21:30:06 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:16:45.473 21:30:06 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:16:45.731 INFO: shutting down applications... 00:16:45.731 21:30:06 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 00:16:45.731 21:30:06 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:16:45.731 21:30:06 -- json_config/json_config.sh@431 -- # json_config_clear target 00:16:45.731 21:30:06 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:16:45.731 21:30:06 -- json_config/json_config.sh@386 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:16:46.298 Calling clear_iscsi_subsystem 00:16:46.298 Calling clear_nvmf_subsystem 00:16:46.298 Calling clear_nbd_subsystem 00:16:46.298 Calling clear_ublk_subsystem 00:16:46.298 Calling clear_vhost_blk_subsystem 00:16:46.298 Calling clear_vhost_scsi_subsystem 00:16:46.298 Calling clear_scheduler_subsystem 00:16:46.298 Calling clear_bdev_subsystem 00:16:46.298 Calling clear_accel_subsystem 00:16:46.298 Calling clear_vmd_subsystem 00:16:46.298 Calling clear_sock_subsystem 00:16:46.298 Calling clear_iobuf_subsystem 00:16:46.298 21:30:06 -- json_config/json_config.sh@390 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:16:46.298 21:30:06 -- json_config/json_config.sh@396 -- # count=100 00:16:46.298 21:30:06 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:16:46.298 21:30:06 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:16:46.298 21:30:06 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:16:46.298 21:30:06 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:16:46.556 21:30:07 -- json_config/json_config.sh@398 -- # break 00:16:46.556 21:30:07 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:16:46.556 21:30:07 -- json_config/json_config.sh@432 -- # json_config_test_shutdown_app target 00:16:46.556 21:30:07 -- json_config/json_config.sh@120 -- # local app=target 00:16:46.556 21:30:07 -- json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:16:46.556 21:30:07 -- json_config/json_config.sh@124 -- # [[ -n 66185 ]] 00:16:46.556 21:30:07 -- json_config/json_config.sh@127 -- # kill -SIGINT 66185 00:16:46.556 21:30:07 -- json_config/json_config.sh@129 -- # (( i = 0 )) 
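By this point the setup phase has also created a small canary bdev, MallocBdevForConfigChangeCheck, and dumped the running configuration with save_config; the shutdown path then clears every subsystem through clear_config.py, sends SIGINT to pid 66185, and (in the loop continuing just below) waits up to 30 half-second intervals for the process to exit. That wait amounts to (sketch, with $tgt_pid standing in for the traced PID):

    kill -SIGINT "$tgt_pid"
    for (( i = 0; i < 30; i++ )); do
        kill -0 "$tgt_pid" 2>/dev/null || break   # still alive?
        sleep 0.5
    done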
00:16:46.556 21:30:07 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:16:46.556 21:30:07 -- json_config/json_config.sh@130 -- # kill -0 66185 00:16:46.556 21:30:07 -- json_config/json_config.sh@134 -- # sleep 0.5 00:16:47.123 21:30:07 -- json_config/json_config.sh@129 -- # (( i++ )) 00:16:47.123 21:30:07 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:16:47.123 21:30:07 -- json_config/json_config.sh@130 -- # kill -0 66185 00:16:47.123 21:30:07 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:16:47.123 SPDK target shutdown done 00:16:47.123 21:30:07 -- json_config/json_config.sh@132 -- # break 00:16:47.123 21:30:07 -- json_config/json_config.sh@137 -- # [[ -n '' ]] 00:16:47.123 21:30:07 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:16:47.123 INFO: relaunching applications... 00:16:47.123 21:30:07 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 00:16:47.123 21:30:07 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:16:47.123 21:30:07 -- json_config/json_config.sh@98 -- # local app=target 00:16:47.123 21:30:07 -- json_config/json_config.sh@99 -- # shift 00:16:47.123 21:30:07 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:16:47.123 21:30:07 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:16:47.123 21:30:07 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:16:47.124 21:30:07 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:16:47.124 21:30:07 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:16:47.124 21:30:07 -- json_config/json_config.sh@111 -- # app_pid[$app]=66375 00:16:47.124 Waiting for target to run... 00:16:47.124 21:30:07 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:16:47.124 21:30:07 -- json_config/json_config.sh@114 -- # waitforlisten 66375 /var/tmp/spdk_tgt.sock 00:16:47.124 21:30:07 -- common/autotest_common.sh@819 -- # '[' -z 66375 ']' 00:16:47.124 21:30:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:16:47.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:16:47.124 21:30:07 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:16:47.124 21:30:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:47.124 21:30:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:16:47.124 21:30:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:47.124 21:30:07 -- common/autotest_common.sh@10 -- # set +x 00:16:47.124 [2024-07-11 21:30:07.961357] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
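The relaunch above differs from the first start in one important way: instead of --wait-for-rpc, the new process is handed the previously saved file via --json, so the whole subsystem configuration is applied during startup and no further setup RPCs are needed. In sketch form (assuming spdk_tgt_config.json holds the earlier save_config output):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
        --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json &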
00:16:47.124 [2024-07-11 21:30:07.961460] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66375 ] 00:16:47.691 [2024-07-11 21:30:08.385293] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:47.691 [2024-07-11 21:30:08.456179] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:47.691 [2024-07-11 21:30:08.456353] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:47.949 [2024-07-11 21:30:08.768559] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:47.949 [2024-07-11 21:30:08.800654] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:16:48.207 21:30:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:48.207 00:16:48.207 21:30:08 -- common/autotest_common.sh@852 -- # return 0 00:16:48.207 21:30:08 -- json_config/json_config.sh@115 -- # echo '' 00:16:48.207 21:30:08 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:16:48.207 INFO: Checking if target configuration is the same... 00:16:48.207 21:30:08 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 00:16:48.207 21:30:08 -- json_config/json_config.sh@441 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:16:48.207 21:30:08 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:16:48.207 21:30:08 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:16:48.207 + '[' 2 -ne 2 ']' 00:16:48.207 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:16:48.207 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:16:48.207 + rootdir=/home/vagrant/spdk_repo/spdk 00:16:48.207 +++ basename /dev/fd/62 00:16:48.207 ++ mktemp /tmp/62.XXX 00:16:48.207 + tmp_file_1=/tmp/62.Cvb 00:16:48.207 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:16:48.207 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:16:48.207 + tmp_file_2=/tmp/spdk_tgt_config.json.Y74 00:16:48.207 + ret=0 00:16:48.207 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:16:48.465 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:16:48.723 + diff -u /tmp/62.Cvb /tmp/spdk_tgt_config.json.Y74 00:16:48.723 INFO: JSON config files are the same 00:16:48.723 + echo 'INFO: JSON config files are the same' 00:16:48.723 + rm /tmp/62.Cvb /tmp/spdk_tgt_config.json.Y74 00:16:48.723 + exit 0 00:16:48.723 INFO: changing configuration and checking if this can be detected... 00:16:48.723 21:30:09 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:16:48.723 21:30:09 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 
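The "Checking if target configuration is the same" step above is a plain textual diff: json_diff.sh normalizes both the live configuration (save_config) and the saved spdk_tgt_config.json with config_filter.py -method sort, then runs diff -u on the two temp files. Assuming config_filter.py filters stdin to stdout (the redirections are not visible in the trace), and with /tmp/live.json and /tmp/saved.json standing in for the mktemp names seen above, the comparison is roughly:

    filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config | $filter -method sort > /tmp/live.json
    $filter -method sort < /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json > /tmp/saved.json
    diff -u /tmp/saved.json /tmp/live.json && echo 'INFO: JSON config files are the same'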
00:16:48.723 21:30:09 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:16:48.723 21:30:09 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:16:48.982 21:30:09 -- json_config/json_config.sh@450 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:16:48.982 21:30:09 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:16:48.982 21:30:09 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:16:48.982 + '[' 2 -ne 2 ']' 00:16:48.982 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:16:48.982 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:16:48.982 + rootdir=/home/vagrant/spdk_repo/spdk 00:16:48.982 +++ basename /dev/fd/62 00:16:48.982 ++ mktemp /tmp/62.XXX 00:16:48.982 + tmp_file_1=/tmp/62.IXX 00:16:48.982 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:16:48.982 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:16:48.982 + tmp_file_2=/tmp/spdk_tgt_config.json.uFU 00:16:48.982 + ret=0 00:16:48.982 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:16:49.241 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:16:49.241 + diff -u /tmp/62.IXX /tmp/spdk_tgt_config.json.uFU 00:16:49.241 + ret=1 00:16:49.241 + echo '=== Start of file: /tmp/62.IXX ===' 00:16:49.241 + cat /tmp/62.IXX 00:16:49.241 + echo '=== End of file: /tmp/62.IXX ===' 00:16:49.241 + echo '' 00:16:49.241 + echo '=== Start of file: /tmp/spdk_tgt_config.json.uFU ===' 00:16:49.241 + cat /tmp/spdk_tgt_config.json.uFU 00:16:49.241 + echo '=== End of file: /tmp/spdk_tgt_config.json.uFU ===' 00:16:49.241 + echo '' 00:16:49.241 + rm /tmp/62.IXX /tmp/spdk_tgt_config.json.uFU 00:16:49.241 + exit 1 00:16:49.241 INFO: configuration change detected. 00:16:49.241 21:30:10 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 
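The configuration change detected above is induced deliberately: the canary bdev created during setup is deleted, so the next save_config output no longer matches the file on disk. The whole trigger is a single RPC, exactly as traced at the top of this block:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck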
00:16:49.241 21:30:10 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:16:49.241 21:30:10 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:16:49.241 21:30:10 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:49.241 21:30:10 -- common/autotest_common.sh@10 -- # set +x 00:16:49.241 21:30:10 -- json_config/json_config.sh@360 -- # local ret=0 00:16:49.241 21:30:10 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:16:49.241 21:30:10 -- json_config/json_config.sh@370 -- # [[ -n 66375 ]] 00:16:49.241 21:30:10 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:16:49.241 21:30:10 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:16:49.241 21:30:10 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:49.241 21:30:10 -- common/autotest_common.sh@10 -- # set +x 00:16:49.241 21:30:10 -- json_config/json_config.sh@239 -- # [[ 0 -eq 1 ]] 00:16:49.499 21:30:10 -- json_config/json_config.sh@246 -- # uname -s 00:16:49.499 21:30:10 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]] 00:16:49.499 21:30:10 -- json_config/json_config.sh@247 -- # rm -f /sample_aio 00:16:49.499 21:30:10 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:16:49.499 21:30:10 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:16:49.499 21:30:10 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:49.499 21:30:10 -- common/autotest_common.sh@10 -- # set +x 00:16:49.499 21:30:10 -- json_config/json_config.sh@376 -- # killprocess 66375 00:16:49.499 21:30:10 -- common/autotest_common.sh@926 -- # '[' -z 66375 ']' 00:16:49.499 21:30:10 -- common/autotest_common.sh@930 -- # kill -0 66375 00:16:49.499 21:30:10 -- common/autotest_common.sh@931 -- # uname 00:16:49.499 21:30:10 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:49.499 21:30:10 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 66375 00:16:49.499 21:30:10 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:49.499 21:30:10 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:49.499 21:30:10 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 66375' 00:16:49.499 killing process with pid 66375 00:16:49.499 21:30:10 -- common/autotest_common.sh@945 -- # kill 66375 00:16:49.499 21:30:10 -- common/autotest_common.sh@950 -- # wait 66375 00:16:49.758 21:30:10 -- json_config/json_config.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:16:49.758 21:30:10 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:16:49.758 21:30:10 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:49.758 21:30:10 -- common/autotest_common.sh@10 -- # set +x 00:16:49.758 INFO: Success 00:16:49.758 21:30:10 -- json_config/json_config.sh@381 -- # return 0 00:16:49.758 21:30:10 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:16:49.758 ************************************ 00:16:49.758 END TEST json_config 00:16:49.758 ************************************ 00:16:49.758 00:16:49.758 real 0m8.465s 00:16:49.758 user 0m12.201s 00:16:49.758 sys 0m1.781s 00:16:49.758 21:30:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:49.758 21:30:10 -- common/autotest_common.sh@10 -- # set +x 00:16:49.758 21:30:10 -- spdk/autotest.sh@179 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:16:49.758 
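Teardown above goes through the shared killprocess helper: it verifies the PID is non-empty and still alive, checks via ps that it is the expected reactor process rather than a sudo wrapper, then kills and reaps it. A simplified sketch of that pattern (not the exact autotest_common.sh implementation):

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1                        # need a PID at all
        kill -0 "$pid" || return 1                       # must still be running
        local name; name=$(ps --no-headers -o comm= "$pid")
        [ "$name" = sudo ] && return 1                   # sketch only: skip sudo wrappers instead of handling them
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true                  # reap it so the next test starts clean
    }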
21:30:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:16:49.758 21:30:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:49.758 21:30:10 -- common/autotest_common.sh@10 -- # set +x 00:16:49.758 ************************************ 00:16:49.758 START TEST json_config_extra_key 00:16:49.758 ************************************ 00:16:49.758 21:30:10 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:16:49.758 21:30:10 -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:49.758 21:30:10 -- nvmf/common.sh@7 -- # uname -s 00:16:49.758 21:30:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:49.758 21:30:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:49.758 21:30:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:49.758 21:30:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:49.758 21:30:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:49.758 21:30:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:49.758 21:30:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:49.758 21:30:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:49.758 21:30:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:49.758 21:30:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:49.758 21:30:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:65f0dc09-2f81-4c7b-a413-2a2a000e2750 00:16:49.758 21:30:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=65f0dc09-2f81-4c7b-a413-2a2a000e2750 00:16:49.758 21:30:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:49.758 21:30:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:49.758 21:30:10 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:16:49.758 21:30:10 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:49.758 21:30:10 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:49.758 21:30:10 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:49.758 21:30:10 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:49.758 21:30:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:49.758 21:30:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:49.758 21:30:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:16:49.758 21:30:10 -- paths/export.sh@5 -- # export PATH 00:16:49.758 21:30:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:49.758 21:30:10 -- nvmf/common.sh@46 -- # : 0 00:16:49.758 21:30:10 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:49.758 21:30:10 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:49.758 21:30:10 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:49.758 21:30:10 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:49.758 21:30:10 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:49.758 21:30:10 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:49.758 21:30:10 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:49.758 21:30:10 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:49.758 21:30:10 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:16:49.758 21:30:10 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:16:49.758 21:30:10 -- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:16:49.758 21:30:10 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:16:49.758 21:30:10 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:16:49.759 21:30:10 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:16:49.759 21:30:10 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:16:49.759 21:30:10 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:16:49.759 21:30:10 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:16:49.759 21:30:10 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:16:49.759 INFO: launching applications... 00:16:49.759 21:30:10 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:16:49.759 21:30:10 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:16:49.759 21:30:10 -- json_config/json_config_extra_key.sh@25 -- # shift 00:16:49.759 21:30:10 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:16:49.759 21:30:10 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:16:49.759 21:30:10 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=66515 00:16:49.759 21:30:10 -- json_config/json_config_extra_key.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:16:49.759 Waiting for target to run... 00:16:49.759 21:30:10 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:16:49.759 21:30:10 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 66515 /var/tmp/spdk_tgt.sock 00:16:49.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
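json_config_extra_key skips the RPC-driven setup entirely: spdk_tgt is pointed straight at test/json_config/extra_key.json with --json, and the harness only has to wait for /var/tmp/spdk_tgt.sock to answer before tearing the target down again. One way to express that wait with rpc.py's own retry and timeout flags (the same -r/-t flags spdkcli_tcp uses later over TCP; treat this as a sketch, not the harness's code):

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s /var/tmp/spdk_tgt.sock rpc_get_methods >/dev/null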
00:16:49.759 21:30:10 -- common/autotest_common.sh@819 -- # '[' -z 66515 ']' 00:16:49.759 21:30:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:16:49.759 21:30:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:49.759 21:30:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:16:49.759 21:30:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:49.759 21:30:10 -- common/autotest_common.sh@10 -- # set +x 00:16:50.017 [2024-07-11 21:30:10.721313] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:16:50.017 [2024-07-11 21:30:10.721424] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66515 ] 00:16:50.276 [2024-07-11 21:30:11.156133] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:50.534 [2024-07-11 21:30:11.227993] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:50.534 [2024-07-11 21:30:11.228212] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:51.100 00:16:51.100 INFO: shutting down applications... 00:16:51.100 21:30:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:51.100 21:30:11 -- common/autotest_common.sh@852 -- # return 0 00:16:51.100 21:30:11 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:16:51.100 21:30:11 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 00:16:51.100 21:30:11 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:16:51.100 21:30:11 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:16:51.100 21:30:11 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:16:51.100 21:30:11 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 66515 ]] 00:16:51.100 21:30:11 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 66515 00:16:51.100 21:30:11 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:16:51.100 21:30:11 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:16:51.100 21:30:11 -- json_config/json_config_extra_key.sh@50 -- # kill -0 66515 00:16:51.100 21:30:11 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:16:51.359 21:30:12 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:16:51.359 21:30:12 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:16:51.359 21:30:12 -- json_config/json_config_extra_key.sh@50 -- # kill -0 66515 00:16:51.359 21:30:12 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:16:51.359 21:30:12 -- json_config/json_config_extra_key.sh@52 -- # break 00:16:51.359 21:30:12 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:16:51.359 21:30:12 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:16:51.359 SPDK target shutdown done 00:16:51.359 21:30:12 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:16:51.359 Success 00:16:51.359 00:16:51.359 real 0m1.679s 00:16:51.359 user 0m1.598s 00:16:51.359 sys 0m0.451s 00:16:51.359 ************************************ 00:16:51.359 END TEST json_config_extra_key 00:16:51.359 ************************************ 00:16:51.359 21:30:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:16:51.359 21:30:12 -- common/autotest_common.sh@10 -- # set +x 00:16:51.359 21:30:12 -- spdk/autotest.sh@180 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:16:51.359 21:30:12 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:16:51.359 21:30:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:51.359 21:30:12 -- common/autotest_common.sh@10 -- # set +x 00:16:51.700 ************************************ 00:16:51.700 START TEST alias_rpc 00:16:51.700 ************************************ 00:16:51.700 21:30:12 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:16:51.700 * Looking for test storage... 00:16:51.700 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:16:51.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:51.700 21:30:12 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:16:51.700 21:30:12 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=66584 00:16:51.700 21:30:12 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 66584 00:16:51.700 21:30:12 -- common/autotest_common.sh@819 -- # '[' -z 66584 ']' 00:16:51.700 21:30:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:51.700 21:30:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:51.700 21:30:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:51.700 21:30:12 -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:51.700 21:30:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:51.700 21:30:12 -- common/autotest_common.sh@10 -- # set +x 00:16:51.700 [2024-07-11 21:30:12.461132] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
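Unlike the json_config runs, alias_rpc starts spdk_tgt without -r, so the RPC server comes up on the default UNIX socket /var/tmp/spdk.sock (the same default waitforlisten falls back to above), and the test's subsequent rpc.py calls can omit -s altogether. The explicit equivalent of that default (sketch):

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null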
00:16:51.700 [2024-07-11 21:30:12.461255] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66584 ] 00:16:51.700 [2024-07-11 21:30:12.602910] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:51.958 [2024-07-11 21:30:12.701657] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:51.959 [2024-07-11 21:30:12.701836] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:52.525 21:30:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:52.525 21:30:13 -- common/autotest_common.sh@852 -- # return 0 00:16:52.525 21:30:13 -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:16:52.782 21:30:13 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 66584 00:16:52.782 21:30:13 -- common/autotest_common.sh@926 -- # '[' -z 66584 ']' 00:16:52.782 21:30:13 -- common/autotest_common.sh@930 -- # kill -0 66584 00:16:52.782 21:30:13 -- common/autotest_common.sh@931 -- # uname 00:16:52.782 21:30:13 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:52.782 21:30:13 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 66584 00:16:53.040 killing process with pid 66584 00:16:53.040 21:30:13 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:53.040 21:30:13 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:53.040 21:30:13 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 66584' 00:16:53.040 21:30:13 -- common/autotest_common.sh@945 -- # kill 66584 00:16:53.040 21:30:13 -- common/autotest_common.sh@950 -- # wait 66584 00:16:53.299 ************************************ 00:16:53.299 END TEST alias_rpc 00:16:53.299 ************************************ 00:16:53.299 00:16:53.299 real 0m1.848s 00:16:53.299 user 0m2.066s 00:16:53.299 sys 0m0.473s 00:16:53.299 21:30:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:53.299 21:30:14 -- common/autotest_common.sh@10 -- # set +x 00:16:53.299 21:30:14 -- spdk/autotest.sh@182 -- # [[ 0 -eq 0 ]] 00:16:53.299 21:30:14 -- spdk/autotest.sh@183 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:16:53.299 21:30:14 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:16:53.299 21:30:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:53.299 21:30:14 -- common/autotest_common.sh@10 -- # set +x 00:16:53.299 ************************************ 00:16:53.299 START TEST spdkcli_tcp 00:16:53.299 ************************************ 00:16:53.299 21:30:14 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:16:53.557 * Looking for test storage... 
00:16:53.557 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:16:53.557 21:30:14 -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:16:53.557 21:30:14 -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:16:53.557 21:30:14 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:16:53.557 21:30:14 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:16:53.557 21:30:14 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:16:53.558 21:30:14 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:53.558 21:30:14 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:16:53.558 21:30:14 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:53.558 21:30:14 -- common/autotest_common.sh@10 -- # set +x 00:16:53.558 21:30:14 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=66659 00:16:53.558 21:30:14 -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:16:53.558 21:30:14 -- spdkcli/tcp.sh@27 -- # waitforlisten 66659 00:16:53.558 21:30:14 -- common/autotest_common.sh@819 -- # '[' -z 66659 ']' 00:16:53.558 21:30:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:53.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:53.558 21:30:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:53.558 21:30:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:53.558 21:30:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:53.558 21:30:14 -- common/autotest_common.sh@10 -- # set +x 00:16:53.558 [2024-07-11 21:30:14.365965] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
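spdkcli_tcp exercises the same RPC server over TCP instead of the UNIX socket: as the entries below show, a socat process bridges 127.0.0.1:9998 to /var/tmp/spdk.sock, and rpc.py then talks to the TCP side with retries and a timeout. Condensed (both commands appear verbatim in the trace that follows):

    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods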
00:16:53.558 [2024-07-11 21:30:14.366085] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66659 ] 00:16:53.815 [2024-07-11 21:30:14.506907] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:53.815 [2024-07-11 21:30:14.609743] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:53.815 [2024-07-11 21:30:14.610372] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:53.815 [2024-07-11 21:30:14.610387] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:54.749 21:30:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:54.749 21:30:15 -- common/autotest_common.sh@852 -- # return 0 00:16:54.749 21:30:15 -- spdkcli/tcp.sh@31 -- # socat_pid=66676 00:16:54.749 21:30:15 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:16:54.749 21:30:15 -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:16:54.749 [ 00:16:54.749 "bdev_malloc_delete", 00:16:54.749 "bdev_malloc_create", 00:16:54.749 "bdev_null_resize", 00:16:54.749 "bdev_null_delete", 00:16:54.749 "bdev_null_create", 00:16:54.749 "bdev_nvme_cuse_unregister", 00:16:54.749 "bdev_nvme_cuse_register", 00:16:54.749 "bdev_opal_new_user", 00:16:54.749 "bdev_opal_set_lock_state", 00:16:54.749 "bdev_opal_delete", 00:16:54.749 "bdev_opal_get_info", 00:16:54.749 "bdev_opal_create", 00:16:54.749 "bdev_nvme_opal_revert", 00:16:54.749 "bdev_nvme_opal_init", 00:16:54.749 "bdev_nvme_send_cmd", 00:16:54.749 "bdev_nvme_get_path_iostat", 00:16:54.749 "bdev_nvme_get_mdns_discovery_info", 00:16:54.749 "bdev_nvme_stop_mdns_discovery", 00:16:54.749 "bdev_nvme_start_mdns_discovery", 00:16:54.749 "bdev_nvme_set_multipath_policy", 00:16:54.749 "bdev_nvme_set_preferred_path", 00:16:54.749 "bdev_nvme_get_io_paths", 00:16:54.749 "bdev_nvme_remove_error_injection", 00:16:54.749 "bdev_nvme_add_error_injection", 00:16:54.749 "bdev_nvme_get_discovery_info", 00:16:54.749 "bdev_nvme_stop_discovery", 00:16:54.749 "bdev_nvme_start_discovery", 00:16:54.749 "bdev_nvme_get_controller_health_info", 00:16:54.749 "bdev_nvme_disable_controller", 00:16:54.749 "bdev_nvme_enable_controller", 00:16:54.749 "bdev_nvme_reset_controller", 00:16:54.749 "bdev_nvme_get_transport_statistics", 00:16:54.749 "bdev_nvme_apply_firmware", 00:16:54.749 "bdev_nvme_detach_controller", 00:16:54.749 "bdev_nvme_get_controllers", 00:16:54.749 "bdev_nvme_attach_controller", 00:16:54.749 "bdev_nvme_set_hotplug", 00:16:54.749 "bdev_nvme_set_options", 00:16:54.749 "bdev_passthru_delete", 00:16:54.749 "bdev_passthru_create", 00:16:54.749 "bdev_lvol_grow_lvstore", 00:16:54.749 "bdev_lvol_get_lvols", 00:16:54.749 "bdev_lvol_get_lvstores", 00:16:54.749 "bdev_lvol_delete", 00:16:54.749 "bdev_lvol_set_read_only", 00:16:54.749 "bdev_lvol_resize", 00:16:54.749 "bdev_lvol_decouple_parent", 00:16:54.749 "bdev_lvol_inflate", 00:16:54.749 "bdev_lvol_rename", 00:16:54.749 "bdev_lvol_clone_bdev", 00:16:54.749 "bdev_lvol_clone", 00:16:54.749 "bdev_lvol_snapshot", 00:16:54.749 "bdev_lvol_create", 00:16:54.749 "bdev_lvol_delete_lvstore", 00:16:54.749 "bdev_lvol_rename_lvstore", 00:16:54.749 "bdev_lvol_create_lvstore", 00:16:54.749 "bdev_raid_set_options", 00:16:54.749 "bdev_raid_remove_base_bdev", 00:16:54.749 "bdev_raid_add_base_bdev", 
00:16:54.749 "bdev_raid_delete", 00:16:54.749 "bdev_raid_create", 00:16:54.749 "bdev_raid_get_bdevs", 00:16:54.749 "bdev_error_inject_error", 00:16:54.749 "bdev_error_delete", 00:16:54.749 "bdev_error_create", 00:16:54.749 "bdev_split_delete", 00:16:54.749 "bdev_split_create", 00:16:54.750 "bdev_delay_delete", 00:16:54.750 "bdev_delay_create", 00:16:54.750 "bdev_delay_update_latency", 00:16:54.750 "bdev_zone_block_delete", 00:16:54.750 "bdev_zone_block_create", 00:16:54.750 "blobfs_create", 00:16:54.750 "blobfs_detect", 00:16:54.750 "blobfs_set_cache_size", 00:16:54.750 "bdev_aio_delete", 00:16:54.750 "bdev_aio_rescan", 00:16:54.750 "bdev_aio_create", 00:16:54.750 "bdev_ftl_set_property", 00:16:54.750 "bdev_ftl_get_properties", 00:16:54.750 "bdev_ftl_get_stats", 00:16:54.750 "bdev_ftl_unmap", 00:16:54.750 "bdev_ftl_unload", 00:16:54.750 "bdev_ftl_delete", 00:16:54.750 "bdev_ftl_load", 00:16:54.750 "bdev_ftl_create", 00:16:54.750 "bdev_virtio_attach_controller", 00:16:54.750 "bdev_virtio_scsi_get_devices", 00:16:54.750 "bdev_virtio_detach_controller", 00:16:54.750 "bdev_virtio_blk_set_hotplug", 00:16:54.750 "bdev_iscsi_delete", 00:16:54.750 "bdev_iscsi_create", 00:16:54.750 "bdev_iscsi_set_options", 00:16:54.750 "bdev_uring_delete", 00:16:54.750 "bdev_uring_create", 00:16:54.750 "accel_error_inject_error", 00:16:54.750 "ioat_scan_accel_module", 00:16:54.750 "dsa_scan_accel_module", 00:16:54.750 "iaa_scan_accel_module", 00:16:54.750 "iscsi_set_options", 00:16:54.750 "iscsi_get_auth_groups", 00:16:54.750 "iscsi_auth_group_remove_secret", 00:16:54.750 "iscsi_auth_group_add_secret", 00:16:54.750 "iscsi_delete_auth_group", 00:16:54.750 "iscsi_create_auth_group", 00:16:54.750 "iscsi_set_discovery_auth", 00:16:54.750 "iscsi_get_options", 00:16:54.750 "iscsi_target_node_request_logout", 00:16:54.750 "iscsi_target_node_set_redirect", 00:16:54.750 "iscsi_target_node_set_auth", 00:16:54.750 "iscsi_target_node_add_lun", 00:16:54.750 "iscsi_get_connections", 00:16:54.750 "iscsi_portal_group_set_auth", 00:16:54.750 "iscsi_start_portal_group", 00:16:54.750 "iscsi_delete_portal_group", 00:16:54.750 "iscsi_create_portal_group", 00:16:54.750 "iscsi_get_portal_groups", 00:16:54.750 "iscsi_delete_target_node", 00:16:54.750 "iscsi_target_node_remove_pg_ig_maps", 00:16:54.750 "iscsi_target_node_add_pg_ig_maps", 00:16:54.750 "iscsi_create_target_node", 00:16:54.750 "iscsi_get_target_nodes", 00:16:54.750 "iscsi_delete_initiator_group", 00:16:54.750 "iscsi_initiator_group_remove_initiators", 00:16:54.750 "iscsi_initiator_group_add_initiators", 00:16:54.750 "iscsi_create_initiator_group", 00:16:54.750 "iscsi_get_initiator_groups", 00:16:54.750 "nvmf_set_crdt", 00:16:54.750 "nvmf_set_config", 00:16:54.750 "nvmf_set_max_subsystems", 00:16:54.750 "nvmf_subsystem_get_listeners", 00:16:54.750 "nvmf_subsystem_get_qpairs", 00:16:54.750 "nvmf_subsystem_get_controllers", 00:16:54.750 "nvmf_get_stats", 00:16:54.750 "nvmf_get_transports", 00:16:54.750 "nvmf_create_transport", 00:16:54.750 "nvmf_get_targets", 00:16:54.750 "nvmf_delete_target", 00:16:54.750 "nvmf_create_target", 00:16:54.750 "nvmf_subsystem_allow_any_host", 00:16:54.750 "nvmf_subsystem_remove_host", 00:16:54.750 "nvmf_subsystem_add_host", 00:16:54.750 "nvmf_subsystem_remove_ns", 00:16:54.750 "nvmf_subsystem_add_ns", 00:16:54.750 "nvmf_subsystem_listener_set_ana_state", 00:16:54.750 "nvmf_discovery_get_referrals", 00:16:54.750 "nvmf_discovery_remove_referral", 00:16:54.750 "nvmf_discovery_add_referral", 00:16:54.750 "nvmf_subsystem_remove_listener", 00:16:54.750 
"nvmf_subsystem_add_listener", 00:16:54.750 "nvmf_delete_subsystem", 00:16:54.750 "nvmf_create_subsystem", 00:16:54.750 "nvmf_get_subsystems", 00:16:54.750 "env_dpdk_get_mem_stats", 00:16:54.750 "nbd_get_disks", 00:16:54.750 "nbd_stop_disk", 00:16:54.750 "nbd_start_disk", 00:16:54.750 "ublk_recover_disk", 00:16:54.750 "ublk_get_disks", 00:16:54.750 "ublk_stop_disk", 00:16:54.750 "ublk_start_disk", 00:16:54.750 "ublk_destroy_target", 00:16:54.750 "ublk_create_target", 00:16:54.750 "virtio_blk_create_transport", 00:16:54.750 "virtio_blk_get_transports", 00:16:54.750 "vhost_controller_set_coalescing", 00:16:54.750 "vhost_get_controllers", 00:16:54.750 "vhost_delete_controller", 00:16:54.750 "vhost_create_blk_controller", 00:16:54.750 "vhost_scsi_controller_remove_target", 00:16:54.750 "vhost_scsi_controller_add_target", 00:16:54.750 "vhost_start_scsi_controller", 00:16:54.750 "vhost_create_scsi_controller", 00:16:54.750 "thread_set_cpumask", 00:16:54.750 "framework_get_scheduler", 00:16:54.750 "framework_set_scheduler", 00:16:54.750 "framework_get_reactors", 00:16:54.750 "thread_get_io_channels", 00:16:54.750 "thread_get_pollers", 00:16:54.750 "thread_get_stats", 00:16:54.750 "framework_monitor_context_switch", 00:16:54.750 "spdk_kill_instance", 00:16:54.750 "log_enable_timestamps", 00:16:54.750 "log_get_flags", 00:16:54.750 "log_clear_flag", 00:16:54.750 "log_set_flag", 00:16:54.750 "log_get_level", 00:16:54.750 "log_set_level", 00:16:54.750 "log_get_print_level", 00:16:54.750 "log_set_print_level", 00:16:54.750 "framework_enable_cpumask_locks", 00:16:54.750 "framework_disable_cpumask_locks", 00:16:54.750 "framework_wait_init", 00:16:54.750 "framework_start_init", 00:16:54.750 "scsi_get_devices", 00:16:54.750 "bdev_get_histogram", 00:16:54.750 "bdev_enable_histogram", 00:16:54.750 "bdev_set_qos_limit", 00:16:54.750 "bdev_set_qd_sampling_period", 00:16:54.750 "bdev_get_bdevs", 00:16:54.750 "bdev_reset_iostat", 00:16:54.750 "bdev_get_iostat", 00:16:54.750 "bdev_examine", 00:16:54.750 "bdev_wait_for_examine", 00:16:54.750 "bdev_set_options", 00:16:54.750 "notify_get_notifications", 00:16:54.750 "notify_get_types", 00:16:54.750 "accel_get_stats", 00:16:54.750 "accel_set_options", 00:16:54.750 "accel_set_driver", 00:16:54.750 "accel_crypto_key_destroy", 00:16:54.750 "accel_crypto_keys_get", 00:16:54.750 "accel_crypto_key_create", 00:16:54.750 "accel_assign_opc", 00:16:54.750 "accel_get_module_info", 00:16:54.750 "accel_get_opc_assignments", 00:16:54.750 "vmd_rescan", 00:16:54.750 "vmd_remove_device", 00:16:54.750 "vmd_enable", 00:16:54.750 "sock_set_default_impl", 00:16:54.750 "sock_impl_set_options", 00:16:54.750 "sock_impl_get_options", 00:16:54.750 "iobuf_get_stats", 00:16:54.750 "iobuf_set_options", 00:16:54.750 "framework_get_pci_devices", 00:16:54.750 "framework_get_config", 00:16:54.750 "framework_get_subsystems", 00:16:54.750 "trace_get_info", 00:16:54.750 "trace_get_tpoint_group_mask", 00:16:54.750 "trace_disable_tpoint_group", 00:16:54.750 "trace_enable_tpoint_group", 00:16:54.750 "trace_clear_tpoint_mask", 00:16:54.750 "trace_set_tpoint_mask", 00:16:54.750 "spdk_get_version", 00:16:54.750 "rpc_get_methods" 00:16:54.750 ] 00:16:54.750 21:30:15 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:16:54.750 21:30:15 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:54.750 21:30:15 -- common/autotest_common.sh@10 -- # set +x 00:16:54.750 21:30:15 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:16:54.750 21:30:15 -- spdkcli/tcp.sh@38 -- # killprocess 66659 00:16:54.750 
21:30:15 -- common/autotest_common.sh@926 -- # '[' -z 66659 ']' 00:16:54.750 21:30:15 -- common/autotest_common.sh@930 -- # kill -0 66659 00:16:54.750 21:30:15 -- common/autotest_common.sh@931 -- # uname 00:16:54.750 21:30:15 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:54.750 21:30:15 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 66659 00:16:54.750 killing process with pid 66659 00:16:54.750 21:30:15 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:54.750 21:30:15 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:54.750 21:30:15 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 66659' 00:16:54.750 21:30:15 -- common/autotest_common.sh@945 -- # kill 66659 00:16:54.750 21:30:15 -- common/autotest_common.sh@950 -- # wait 66659 00:16:55.316 ************************************ 00:16:55.316 END TEST spdkcli_tcp 00:16:55.316 ************************************ 00:16:55.316 00:16:55.316 real 0m1.842s 00:16:55.316 user 0m3.463s 00:16:55.316 sys 0m0.489s 00:16:55.316 21:30:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:55.316 21:30:16 -- common/autotest_common.sh@10 -- # set +x 00:16:55.316 21:30:16 -- spdk/autotest.sh@186 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:16:55.316 21:30:16 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:16:55.316 21:30:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:55.316 21:30:16 -- common/autotest_common.sh@10 -- # set +x 00:16:55.316 ************************************ 00:16:55.316 START TEST dpdk_mem_utility 00:16:55.316 ************************************ 00:16:55.316 21:30:16 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:16:55.316 * Looking for test storage... 00:16:55.316 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:16:55.316 21:30:16 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:16:55.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:55.316 21:30:16 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=66744 00:16:55.316 21:30:16 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:55.316 21:30:16 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 66744 00:16:55.316 21:30:16 -- common/autotest_common.sh@819 -- # '[' -z 66744 ']' 00:16:55.316 21:30:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:55.316 21:30:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:55.316 21:30:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:55.316 21:30:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:55.316 21:30:16 -- common/autotest_common.sh@10 -- # set +x 00:16:55.316 [2024-07-11 21:30:16.228919] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
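The dpdk_mem_utility test starting here brings up a fresh single-core target and inspects its DPDK heap through scripts/dpdk_mem_info.py; the heap, mempool, and memzone dump that fills the next portion of the log is that script's output, built from the /tmp/spdk_mem_dump.txt file that the env_dpdk_get_mem_stats RPC reports just before it. Against any running target the same two views can be produced by hand, mirroring the calls made below:

    ./scripts/rpc.py env_dpdk_get_mem_stats       # reports /tmp/spdk_mem_dump.txt
    ./scripts/dpdk_mem_info.py                    # heap/mempool/memzone summary
    ./scripts/dpdk_mem_info.py -m 0               # detailed breakdown of heap id 0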
00:16:55.316 [2024-07-11 21:30:16.229037] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66744 ] 00:16:55.572 [2024-07-11 21:30:16.367504] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:55.572 [2024-07-11 21:30:16.461348] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:55.572 [2024-07-11 21:30:16.461560] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:56.503 21:30:17 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:56.503 21:30:17 -- common/autotest_common.sh@852 -- # return 0 00:16:56.504 21:30:17 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:16:56.504 21:30:17 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:16:56.504 21:30:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:56.504 21:30:17 -- common/autotest_common.sh@10 -- # set +x 00:16:56.504 { 00:16:56.504 "filename": "/tmp/spdk_mem_dump.txt" 00:16:56.504 } 00:16:56.504 21:30:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:56.504 21:30:17 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:16:56.504 DPDK memory size 814.000000 MiB in 1 heap(s) 00:16:56.504 1 heaps totaling size 814.000000 MiB 00:16:56.504 size: 814.000000 MiB heap id: 0 00:16:56.504 end heaps---------- 00:16:56.504 8 mempools totaling size 598.116089 MiB 00:16:56.504 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:16:56.504 size: 158.602051 MiB name: PDU_data_out_Pool 00:16:56.504 size: 84.521057 MiB name: bdev_io_66744 00:16:56.504 size: 51.011292 MiB name: evtpool_66744 00:16:56.504 size: 50.003479 MiB name: msgpool_66744 00:16:56.504 size: 21.763794 MiB name: PDU_Pool 00:16:56.504 size: 19.513306 MiB name: SCSI_TASK_Pool 00:16:56.504 size: 0.026123 MiB name: Session_Pool 00:16:56.504 end mempools------- 00:16:56.504 6 memzones totaling size 4.142822 MiB 00:16:56.504 size: 1.000366 MiB name: RG_ring_0_66744 00:16:56.504 size: 1.000366 MiB name: RG_ring_1_66744 00:16:56.504 size: 1.000366 MiB name: RG_ring_4_66744 00:16:56.504 size: 1.000366 MiB name: RG_ring_5_66744 00:16:56.504 size: 0.125366 MiB name: RG_ring_2_66744 00:16:56.504 size: 0.015991 MiB name: RG_ring_3_66744 00:16:56.504 end memzones------- 00:16:56.504 21:30:17 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:16:56.504 heap id: 0 total size: 814.000000 MiB number of busy elements: 301 number of free elements: 15 00:16:56.504 list of free elements. 
size: 12.471741 MiB 00:16:56.504 element at address: 0x200000400000 with size: 1.999512 MiB 00:16:56.504 element at address: 0x200018e00000 with size: 0.999878 MiB 00:16:56.504 element at address: 0x200019000000 with size: 0.999878 MiB 00:16:56.504 element at address: 0x200003e00000 with size: 0.996277 MiB 00:16:56.504 element at address: 0x200031c00000 with size: 0.994446 MiB 00:16:56.504 element at address: 0x200013800000 with size: 0.978699 MiB 00:16:56.504 element at address: 0x200007000000 with size: 0.959839 MiB 00:16:56.504 element at address: 0x200019200000 with size: 0.936584 MiB 00:16:56.504 element at address: 0x200000200000 with size: 0.832825 MiB 00:16:56.504 element at address: 0x20001aa00000 with size: 0.568420 MiB 00:16:56.504 element at address: 0x20000b200000 with size: 0.488892 MiB 00:16:56.504 element at address: 0x200000800000 with size: 0.486328 MiB 00:16:56.504 element at address: 0x200019400000 with size: 0.485657 MiB 00:16:56.504 element at address: 0x200027e00000 with size: 0.396667 MiB 00:16:56.504 element at address: 0x200003a00000 with size: 0.347839 MiB 00:16:56.504 list of standard malloc elements. size: 199.265686 MiB 00:16:56.504 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:16:56.504 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:16:56.504 element at address: 0x200018efff80 with size: 1.000122 MiB 00:16:56.504 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:16:56.504 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:16:56.504 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:16:56.504 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:16:56.504 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:16:56.504 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:16:56.504 element at address: 0x2000002d5340 with size: 0.000183 MiB 00:16:56.504 element at address: 0x2000002d5400 with size: 0.000183 MiB 00:16:56.504 element at address: 0x2000002d54c0 with size: 0.000183 MiB 00:16:56.504 element at address: 0x2000002d5580 with size: 0.000183 MiB 00:16:56.504 element at address: 0x2000002d5640 with size: 0.000183 MiB 00:16:56.504 element at address: 0x2000002d5700 with size: 0.000183 MiB 00:16:56.504 element at address: 0x2000002d57c0 with size: 0.000183 MiB 00:16:56.504 element at address: 0x2000002d5880 with size: 0.000183 MiB 00:16:56.504 element at address: 0x2000002d5940 with size: 0.000183 MiB 00:16:56.504 element at address: 0x2000002d5a00 with size: 0.000183 MiB 00:16:56.504 element at address: 0x2000002d5ac0 with size: 0.000183 MiB 00:16:56.504 element at address: 0x2000002d5b80 with size: 0.000183 MiB 00:16:56.504 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:16:56.504 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:16:56.504 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:16:56.504 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:16:56.504 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:16:56.504 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:16:56.504 element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:16:56.504 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:16:56.504 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:16:56.504 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:16:56.504 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:16:56.504 element at address: 0x2000002d6480 with size: 0.000183 MiB 
00:16:56.504 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:16:56.504 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:16:56.504 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:16:56.504 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:16:56.504 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:16:56.504 element at address: 0x2000002d6a40 with size: 0.000183 MiB 00:16:56.504 element at address: 0x2000002d6b00 with size: 0.000183 MiB 00:16:56.504 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:16:56.504 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:16:56.504 element at address: 0x2000002d6d40 with size: 0.000183 MiB 00:16:56.504 element at address: 0x2000002d6e00 with size: 0.000183 MiB 00:16:56.504 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:16:56.504 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:16:56.504 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:16:56.504 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:16:56.504 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:16:56.504 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:16:56.504 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:16:56.504 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:16:56.504 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:16:56.504 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:16:56.504 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:16:56.504 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:16:56.504 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:16:56.505 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:16:56.505 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:16:56.505 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:16:56.505 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:16:56.505 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:16:56.505 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:16:56.505 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:16:56.505 element at address: 0x20000087c800 with size: 0.000183 MiB 00:16:56.505 element at address: 0x20000087c8c0 with size: 0.000183 MiB 00:16:56.505 element at address: 0x20000087c980 with size: 0.000183 MiB 00:16:56.505 element at address: 0x20000087ca40 with size: 0.000183 MiB 00:16:56.505 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:16:56.505 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:16:56.505 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:16:56.505 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:16:56.505 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:16:56.505 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:16:56.505 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:16:56.505 element at address: 0x200003a590c0 with size: 0.000183 MiB 00:16:56.505 element at address: 0x200003a59180 with size: 0.000183 MiB 00:16:56.505 element at address: 0x200003a59240 with size: 0.000183 MiB 00:16:56.505 element at address: 0x200003a59300 with size: 0.000183 MiB 00:16:56.505 element at address: 0x200003a593c0 with size: 0.000183 MiB 00:16:56.505 element at address: 0x200003a59480 with size: 0.000183 MiB 00:16:56.505 element at address: 0x200003a59540 with size: 0.000183 MiB 00:16:56.505 element at 
address: 0x200003a59600 with size: 0.000183 MiB 00:16:56.505 element at address: 0x200003a596c0 with size: 0.000183 MiB 00:16:56.505 element at address: 0x200003a59780 with size: 0.000183 MiB 00:16:56.505 element at address: 0x200003a59840 with size: 0.000183 MiB 00:16:56.505 element at address: 0x200003a59900 with size: 0.000183 MiB 00:16:56.505 element at address: 0x200003a599c0 with size: 0.000183 MiB 00:16:56.505 element at address: 0x200003a59a80 with size: 0.000183 MiB 00:16:56.505 element at address: 0x200003a59b40 with size: 0.000183 MiB 00:16:56.505 element at address: 0x200003a59c00 with size: 0.000183 MiB 00:16:56.505 element at address: 0x200003a59cc0 with size: 0.000183 MiB 00:16:56.505 element at address: 0x200003a59d80 with size: 0.000183 MiB 00:16:56.505 element at address: 0x200003a59e40 with size: 0.000183 MiB 00:16:56.505 element at address: 0x200003a59f00 with size: 0.000183 MiB 00:16:56.505 element at address: 0x200003a59fc0 with size: 0.000183 MiB 00:16:56.505 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:16:56.505 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:16:56.505 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:16:56.505 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:16:56.505 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:16:56.505 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:16:56.505 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:16:56.505 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:16:56.505 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:16:56.505 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:16:56.505 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:16:56.505 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:16:56.505 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:16:56.505 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:16:56.505 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:16:56.505 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:16:56.505 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:16:56.505 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:16:56.505 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:16:56.505 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:16:56.505 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:16:56.505 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:16:56.505 element at address: 0x200003adb300 with size: 0.000183 MiB 00:16:56.505 element at address: 0x200003adb500 with size: 0.000183 MiB 00:16:56.505 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:16:56.505 element at address: 0x200003affa80 with size: 0.000183 MiB 00:16:56.505 element at address: 0x200003affb40 with size: 0.000183 MiB 00:16:56.505 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:16:56.505 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:16:56.505 element at address: 0x20000b27d280 with size: 0.000183 MiB 00:16:56.505 element at address: 0x20000b27d340 with size: 0.000183 MiB 00:16:56.505 element at address: 0x20000b27d400 with size: 0.000183 MiB 00:16:56.505 element at address: 0x20000b27d4c0 with size: 0.000183 MiB 00:16:56.505 element at address: 0x20000b27d580 with size: 0.000183 MiB 00:16:56.505 element at address: 0x20000b27d640 with size: 0.000183 MiB 00:16:56.505 element at address: 0x20000b27d700 
with size: 0.000183 MiB 00:16:56.505 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:16:56.505 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:16:56.505 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:16:56.505 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:16:56.505 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:16:56.505 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:16:56.505 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:16:56.505 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:16:56.505 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:16:56.505 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:16:56.505 element at address: 0x20001aa91840 with size: 0.000183 MiB 00:16:56.505 element at address: 0x20001aa91900 with size: 0.000183 MiB 00:16:56.505 element at address: 0x20001aa919c0 with size: 0.000183 MiB 00:16:56.505 element at address: 0x20001aa91a80 with size: 0.000183 MiB 00:16:56.505 element at address: 0x20001aa91b40 with size: 0.000183 MiB 00:16:56.505 element at address: 0x20001aa91c00 with size: 0.000183 MiB 00:16:56.505 element at address: 0x20001aa91cc0 with size: 0.000183 MiB 00:16:56.505 element at address: 0x20001aa91d80 with size: 0.000183 MiB 00:16:56.505 element at address: 0x20001aa91e40 with size: 0.000183 MiB 00:16:56.505 element at address: 0x20001aa91f00 with size: 0.000183 MiB 00:16:56.505 element at address: 0x20001aa91fc0 with size: 0.000183 MiB 00:16:56.505 element at address: 0x20001aa92080 with size: 0.000183 MiB 00:16:56.505 element at address: 0x20001aa92140 with size: 0.000183 MiB 00:16:56.505 element at address: 0x20001aa92200 with size: 0.000183 MiB 00:16:56.505 element at address: 0x20001aa922c0 with size: 0.000183 MiB 00:16:56.505 element at address: 0x20001aa92380 with size: 0.000183 MiB 00:16:56.505 element at address: 0x20001aa92440 with size: 0.000183 MiB 00:16:56.506 element at address: 0x20001aa92500 with size: 0.000183 MiB 00:16:56.506 element at address: 0x20001aa925c0 with size: 0.000183 MiB 00:16:56.506 element at address: 0x20001aa92680 with size: 0.000183 MiB 00:16:56.506 element at address: 0x20001aa92740 with size: 0.000183 MiB 00:16:56.506 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:16:56.506 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:16:56.506 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:16:56.506 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:16:56.506 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:16:56.506 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:16:56.506 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:16:56.506 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:16:56.506 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:16:56.506 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:16:56.506 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:16:56.506 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:16:56.506 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:16:56.506 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:16:56.506 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:16:56.506 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:16:56.506 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:16:56.506 element at address: 0x20001aa934c0 with size: 0.000183 MiB 
00:16:56.506 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:16:56.506 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:16:56.506 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:16:56.506 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:16:56.506 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:16:56.506 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:16:56.506 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:16:56.506 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:16:56.506 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:16:56.506 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:16:56.506 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:16:56.506 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:16:56.506 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:16:56.506 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:16:56.506 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:16:56.506 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:16:56.506 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:16:56.506 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:16:56.506 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:16:56.506 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:16:56.506 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:16:56.506 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:16:56.506 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:16:56.506 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:16:56.506 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:16:56.506 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:16:56.506 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:16:56.506 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:16:56.506 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:16:56.506 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:16:56.506 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:16:56.506 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:16:56.506 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:16:56.506 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:16:56.506 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:16:56.506 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:16:56.506 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:16:56.506 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:16:56.506 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:16:56.506 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:16:56.506 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:16:56.506 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:16:56.506 element at address: 0x200027e658c0 with size: 0.000183 MiB 00:16:56.506 element at address: 0x200027e65980 with size: 0.000183 MiB 00:16:56.506 element at address: 0x200027e6c580 with size: 0.000183 MiB 00:16:56.506 element at address: 0x200027e6c780 with size: 0.000183 MiB 00:16:56.506 element at address: 0x200027e6c840 with size: 0.000183 MiB 00:16:56.506 element at address: 0x200027e6c900 with size: 0.000183 MiB 00:16:56.506 element at address: 0x200027e6c9c0 with size: 0.000183 MiB 00:16:56.506 element at 
address: 0x200027e6ca80 with size: 0.000183 MiB 00:16:56.506 element at address: 0x200027e6cb40 with size: 0.000183 MiB 00:16:56.506 element at address: 0x200027e6cc00 with size: 0.000183 MiB 00:16:56.506 element at address: 0x200027e6ccc0 with size: 0.000183 MiB 00:16:56.506 element at address: 0x200027e6cd80 with size: 0.000183 MiB 00:16:56.506 element at address: 0x200027e6ce40 with size: 0.000183 MiB 00:16:56.506 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:16:56.506 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:16:56.506 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:16:56.506 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:16:56.506 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:16:56.506 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:16:56.506 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:16:56.506 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:16:56.506 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:16:56.506 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:16:56.506 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:16:56.506 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:16:56.506 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:16:56.506 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:16:56.506 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:16:56.506 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:16:56.506 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:16:56.506 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:16:56.506 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:16:56.506 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:16:56.506 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:16:56.506 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:16:56.506 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:16:56.506 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:16:56.507 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:16:56.507 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:16:56.507 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:16:56.507 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:16:56.507 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:16:56.507 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:16:56.507 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:16:56.507 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:16:56.507 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:16:56.507 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:16:56.507 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:16:56.507 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:16:56.507 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:16:56.507 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:16:56.507 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:16:56.507 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:16:56.507 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:16:56.507 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:16:56.507 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:16:56.507 element at address: 0x200027e6ef40 
with size: 0.000183 MiB 00:16:56.507 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:16:56.507 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:16:56.507 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:16:56.507 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:16:56.507 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:16:56.507 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:16:56.507 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:16:56.507 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:16:56.507 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:16:56.507 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:16:56.507 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:16:56.507 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:16:56.507 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:16:56.507 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:16:56.507 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:16:56.507 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:16:56.507 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:16:56.507 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:16:56.507 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:16:56.507 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:16:56.507 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:16:56.507 list of memzone associated elements. size: 602.262573 MiB 00:16:56.507 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:16:56.507 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:16:56.507 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:16:56.507 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:16:56.507 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:16:56.507 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_66744_0 00:16:56.507 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:16:56.507 associated memzone info: size: 48.002930 MiB name: MP_evtpool_66744_0 00:16:56.507 element at address: 0x200003fff380 with size: 48.003052 MiB 00:16:56.507 associated memzone info: size: 48.002930 MiB name: MP_msgpool_66744_0 00:16:56.507 element at address: 0x2000195be940 with size: 20.255554 MiB 00:16:56.507 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:16:56.507 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:16:56.507 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:16:56.507 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:16:56.507 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_66744 00:16:56.507 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:16:56.507 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_66744 00:16:56.507 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:16:56.507 associated memzone info: size: 1.007996 MiB name: MP_evtpool_66744 00:16:56.507 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:16:56.507 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:16:56.507 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:16:56.507 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:16:56.507 element at address: 0x2000070fde40 with size: 1.008118 
MiB 00:16:56.507 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:16:56.507 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:16:56.507 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:16:56.507 element at address: 0x200003eff180 with size: 1.000488 MiB 00:16:56.507 associated memzone info: size: 1.000366 MiB name: RG_ring_0_66744 00:16:56.507 element at address: 0x200003affc00 with size: 1.000488 MiB 00:16:56.507 associated memzone info: size: 1.000366 MiB name: RG_ring_1_66744 00:16:56.507 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:16:56.507 associated memzone info: size: 1.000366 MiB name: RG_ring_4_66744 00:16:56.507 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:16:56.507 associated memzone info: size: 1.000366 MiB name: RG_ring_5_66744 00:16:56.507 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:16:56.507 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_66744 00:16:56.507 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:16:56.507 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:16:56.507 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:16:56.507 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:16:56.507 element at address: 0x20001947c540 with size: 0.250488 MiB 00:16:56.507 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:16:56.507 element at address: 0x200003adf880 with size: 0.125488 MiB 00:16:56.507 associated memzone info: size: 0.125366 MiB name: RG_ring_2_66744 00:16:56.507 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:16:56.507 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:16:56.507 element at address: 0x200027e65a40 with size: 0.023743 MiB 00:16:56.507 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:16:56.507 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:16:56.507 associated memzone info: size: 0.015991 MiB name: RG_ring_3_66744 00:16:56.507 element at address: 0x200027e6bb80 with size: 0.002441 MiB 00:16:56.507 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:16:56.507 element at address: 0x2000002d6780 with size: 0.000305 MiB 00:16:56.507 associated memzone info: size: 0.000183 MiB name: MP_msgpool_66744 00:16:56.507 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:16:56.508 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_66744 00:16:56.508 element at address: 0x200027e6c640 with size: 0.000305 MiB 00:16:56.508 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:16:56.508 21:30:17 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:16:56.508 21:30:17 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 66744 00:16:56.508 21:30:17 -- common/autotest_common.sh@926 -- # '[' -z 66744 ']' 00:16:56.508 21:30:17 -- common/autotest_common.sh@930 -- # kill -0 66744 00:16:56.508 21:30:17 -- common/autotest_common.sh@931 -- # uname 00:16:56.508 21:30:17 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:56.508 21:30:17 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 66744 00:16:56.508 21:30:17 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:56.508 killing process with pid 66744 00:16:56.508 21:30:17 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:56.508 21:30:17 -- common/autotest_common.sh@944 -- # 
echo 'killing process with pid 66744' 00:16:56.508 21:30:17 -- common/autotest_common.sh@945 -- # kill 66744 00:16:56.508 21:30:17 -- common/autotest_common.sh@950 -- # wait 66744 00:16:56.831 00:16:56.831 real 0m1.617s 00:16:56.831 user 0m1.695s 00:16:56.831 sys 0m0.441s 00:16:56.831 21:30:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:56.831 ************************************ 00:16:56.831 END TEST dpdk_mem_utility 00:16:56.831 ************************************ 00:16:56.831 21:30:17 -- common/autotest_common.sh@10 -- # set +x 00:16:56.831 21:30:17 -- spdk/autotest.sh@187 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:16:56.831 21:30:17 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:16:56.831 21:30:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:56.831 21:30:17 -- common/autotest_common.sh@10 -- # set +x 00:16:56.831 ************************************ 00:16:56.831 START TEST event 00:16:56.831 ************************************ 00:16:56.831 21:30:17 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:16:57.090 * Looking for test storage... 00:16:57.090 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:16:57.090 21:30:17 -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:16:57.090 21:30:17 -- bdev/nbd_common.sh@6 -- # set -e 00:16:57.090 21:30:17 -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:16:57.090 21:30:17 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:16:57.090 21:30:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:57.090 21:30:17 -- common/autotest_common.sh@10 -- # set +x 00:16:57.090 ************************************ 00:16:57.090 START TEST event_perf 00:16:57.090 ************************************ 00:16:57.090 21:30:17 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:16:57.090 Running I/O for 1 seconds...[2024-07-11 21:30:17.872734] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:16:57.090 [2024-07-11 21:30:17.872806] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66814 ] 00:16:57.091 [2024-07-11 21:30:18.005401] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:57.347 [2024-07-11 21:30:18.096462] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:57.347 [2024-07-11 21:30:18.096750] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:57.347 [2024-07-11 21:30:18.096608] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:57.347 [2024-07-11 21:30:18.096753] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:58.279 Running I/O for 1 seconds... 00:16:58.279 lcore 0: 192708 00:16:58.279 lcore 1: 192708 00:16:58.279 lcore 2: 192707 00:16:58.279 lcore 3: 192707 00:16:58.279 done. 
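event_perf, as invoked above, measures how many events each reactor can process in the requested duration; the four lcore lines are the per-core counts after the one-second run (-t 1) across the 0xF core mask. Varying the run is a matter of the same two flags, for example:

    # two reactors for five seconds (binary path as built in this workspace)
    ./test/event/event_perf/event_perf -m 0x3 -t 5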
00:16:58.279 00:16:58.279 real 0m1.311s 00:16:58.279 user 0m4.130s 00:16:58.279 sys 0m0.064s 00:16:58.279 21:30:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:58.279 21:30:19 -- common/autotest_common.sh@10 -- # set +x 00:16:58.279 ************************************ 00:16:58.279 END TEST event_perf 00:16:58.279 ************************************ 00:16:58.279 21:30:19 -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:16:58.279 21:30:19 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:16:58.279 21:30:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:58.279 21:30:19 -- common/autotest_common.sh@10 -- # set +x 00:16:58.279 ************************************ 00:16:58.279 START TEST event_reactor 00:16:58.279 ************************************ 00:16:58.279 21:30:19 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:16:58.537 [2024-07-11 21:30:19.237625] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:16:58.537 [2024-07-11 21:30:19.237733] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66853 ] 00:16:58.537 [2024-07-11 21:30:19.373157] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:58.537 [2024-07-11 21:30:19.450716] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:59.910 test_start 00:16:59.910 oneshot 00:16:59.910 tick 100 00:16:59.910 tick 100 00:16:59.910 tick 250 00:16:59.910 tick 100 00:16:59.910 tick 100 00:16:59.910 tick 250 00:16:59.910 tick 500 00:16:59.910 tick 100 00:16:59.910 tick 100 00:16:59.910 tick 100 00:16:59.910 tick 250 00:16:59.910 tick 100 00:16:59.910 tick 100 00:16:59.910 test_end 00:16:59.910 ************************************ 00:16:59.910 END TEST event_reactor 00:16:59.910 ************************************ 00:16:59.910 00:16:59.910 real 0m1.321s 00:16:59.910 user 0m1.159s 00:16:59.910 sys 0m0.055s 00:16:59.910 21:30:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:59.910 21:30:20 -- common/autotest_common.sh@10 -- # set +x 00:16:59.910 21:30:20 -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:16:59.910 21:30:20 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:16:59.910 21:30:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:59.910 21:30:20 -- common/autotest_common.sh@10 -- # set +x 00:16:59.910 ************************************ 00:16:59.910 START TEST event_reactor_perf 00:16:59.910 ************************************ 00:16:59.910 21:30:20 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:16:59.910 [2024-07-11 21:30:20.616537] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
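The event_reactor run above exercises scheduled events on a single core (-c 0x1 in the EAL parameters): the 'oneshot' and 'tick' lines record those events as they fire between test_start and test_end, while event_reactor_perf, which is starting here, measures raw event throughput on one core and reports it as events per second. Both binaries take only a duration:

    ./test/event/reactor/reactor -t 1              # one-shot plus tick events, as traced above
    ./test/event/reactor_perf/reactor_perf -t 1    # throughput run, result reported below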
00:16:59.910 [2024-07-11 21:30:20.616646] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66888 ] 00:16:59.910 [2024-07-11 21:30:20.755924] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:59.910 [2024-07-11 21:30:20.853708] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:01.279 test_start 00:17:01.279 test_end 00:17:01.279 Performance: 366916 events per second 00:17:01.279 00:17:01.279 real 0m1.332s 00:17:01.279 user 0m1.170s 00:17:01.279 sys 0m0.055s 00:17:01.279 21:30:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:01.279 ************************************ 00:17:01.279 END TEST event_reactor_perf 00:17:01.279 ************************************ 00:17:01.279 21:30:21 -- common/autotest_common.sh@10 -- # set +x 00:17:01.279 21:30:21 -- event/event.sh@49 -- # uname -s 00:17:01.279 21:30:21 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:17:01.279 21:30:21 -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:17:01.279 21:30:21 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:17:01.279 21:30:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:01.279 21:30:21 -- common/autotest_common.sh@10 -- # set +x 00:17:01.279 ************************************ 00:17:01.279 START TEST event_scheduler 00:17:01.279 ************************************ 00:17:01.279 21:30:21 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:17:01.279 * Looking for test storage... 00:17:01.279 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:17:01.279 21:30:22 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:17:01.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:01.279 21:30:22 -- scheduler/scheduler.sh@35 -- # scheduler_pid=66943 00:17:01.279 21:30:22 -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:17:01.279 21:30:22 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:17:01.279 21:30:22 -- scheduler/scheduler.sh@37 -- # waitforlisten 66943 00:17:01.279 21:30:22 -- common/autotest_common.sh@819 -- # '[' -z 66943 ']' 00:17:01.279 21:30:22 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:01.279 21:30:22 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:01.279 21:30:22 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:01.279 21:30:22 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:01.279 21:30:22 -- common/autotest_common.sh@10 -- # set +x 00:17:01.279 [2024-07-11 21:30:22.121720] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
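The event_scheduler test launched here starts the scheduler test app on four cores with --wait-for-rpc, switches the framework to the dynamic scheduler, and then creates, activates, and deletes pinned threads through the scheduler_plugin RPCs; all of that appears in the trace that follows, wrapped in the rpc_cmd helper. Stripped of the harness, the sequence is roughly equivalent to the following rpc.py calls (assuming the test's scheduler_plugin module is importable; thread ids 11 and 12 are the ones assigned in this run):

    ./scripts/rpc.py framework_set_scheduler dynamic
    ./scripts/rpc.py framework_start_init
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12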
00:17:01.279 [2024-07-11 21:30:22.122736] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66943 ] 00:17:01.536 [2024-07-11 21:30:22.266941] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:01.536 [2024-07-11 21:30:22.377203] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:01.536 [2024-07-11 21:30:22.377367] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:01.536 [2024-07-11 21:30:22.377519] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:01.536 [2024-07-11 21:30:22.377520] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:02.469 21:30:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:02.469 21:30:23 -- common/autotest_common.sh@852 -- # return 0 00:17:02.469 21:30:23 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:17:02.469 21:30:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:02.469 21:30:23 -- common/autotest_common.sh@10 -- # set +x 00:17:02.469 POWER: Env isn't set yet! 00:17:02.469 POWER: Attempting to initialise ACPI cpufreq power management... 00:17:02.469 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:17:02.469 POWER: Cannot set governor of lcore 0 to userspace 00:17:02.469 POWER: Attempting to initialise PSTAT power management... 00:17:02.469 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:17:02.469 POWER: Cannot set governor of lcore 0 to performance 00:17:02.469 POWER: Attempting to initialise AMD PSTATE power management... 00:17:02.469 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:17:02.469 POWER: Cannot set governor of lcore 0 to userspace 00:17:02.469 POWER: Attempting to initialise CPPC power management... 00:17:02.469 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:17:02.469 POWER: Cannot set governor of lcore 0 to userspace 00:17:02.469 POWER: Attempting to initialise VM power management... 
00:17:02.469 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:17:02.469 POWER: Unable to set Power Management Environment for lcore 0 00:17:02.469 [2024-07-11 21:30:23.115541] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:17:02.469 [2024-07-11 21:30:23.115556] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:17:02.469 [2024-07-11 21:30:23.115565] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:17:02.469 [2024-07-11 21:30:23.115578] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:17:02.469 [2024-07-11 21:30:23.115586] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:17:02.469 [2024-07-11 21:30:23.115593] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:17:02.469 21:30:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:02.469 21:30:23 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:17:02.469 21:30:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:02.469 21:30:23 -- common/autotest_common.sh@10 -- # set +x 00:17:02.469 [2024-07-11 21:30:23.211871] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:17:02.469 21:30:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:02.469 21:30:23 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:17:02.469 21:30:23 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:17:02.469 21:30:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:02.469 21:30:23 -- common/autotest_common.sh@10 -- # set +x 00:17:02.469 ************************************ 00:17:02.469 START TEST scheduler_create_thread 00:17:02.469 ************************************ 00:17:02.469 21:30:23 -- common/autotest_common.sh@1104 -- # scheduler_create_thread 00:17:02.469 21:30:23 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:17:02.469 21:30:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:02.469 21:30:23 -- common/autotest_common.sh@10 -- # set +x 00:17:02.469 2 00:17:02.469 21:30:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:02.469 21:30:23 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:17:02.469 21:30:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:02.469 21:30:23 -- common/autotest_common.sh@10 -- # set +x 00:17:02.469 3 00:17:02.469 21:30:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:02.469 21:30:23 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:17:02.469 21:30:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:02.469 21:30:23 -- common/autotest_common.sh@10 -- # set +x 00:17:02.469 4 00:17:02.469 21:30:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:02.469 21:30:23 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:17:02.469 21:30:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:02.469 21:30:23 -- common/autotest_common.sh@10 -- # set +x 00:17:02.469 5 00:17:02.469 21:30:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:02.469 21:30:23 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:17:02.469 21:30:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:02.469 21:30:23 -- common/autotest_common.sh@10 -- # set +x 00:17:02.469 6 00:17:02.469 21:30:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:02.469 21:30:23 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:17:02.469 21:30:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:02.469 21:30:23 -- common/autotest_common.sh@10 -- # set +x 00:17:02.469 7 00:17:02.469 21:30:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:02.469 21:30:23 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:17:02.469 21:30:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:02.469 21:30:23 -- common/autotest_common.sh@10 -- # set +x 00:17:02.469 8 00:17:02.469 21:30:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:02.469 21:30:23 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:17:02.469 21:30:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:02.469 21:30:23 -- common/autotest_common.sh@10 -- # set +x 00:17:02.469 9 00:17:02.469 21:30:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:02.469 21:30:23 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:17:02.469 21:30:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:02.469 21:30:23 -- common/autotest_common.sh@10 -- # set +x 00:17:02.469 10 00:17:02.469 21:30:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:02.469 21:30:23 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:17:02.469 21:30:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:02.469 21:30:23 -- common/autotest_common.sh@10 -- # set +x 00:17:02.469 21:30:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:02.469 21:30:23 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:17:02.469 21:30:23 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:17:02.469 21:30:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:02.469 21:30:23 -- common/autotest_common.sh@10 -- # set +x 00:17:02.469 21:30:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:02.469 21:30:23 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:17:02.469 21:30:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:02.469 21:30:23 -- common/autotest_common.sh@10 -- # set +x 00:17:03.861 21:30:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:03.861 21:30:24 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:17:03.861 21:30:24 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:17:03.861 21:30:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:03.861 21:30:24 -- common/autotest_common.sh@10 -- # set +x 00:17:05.235 ************************************ 00:17:05.235 END TEST scheduler_create_thread 00:17:05.235 ************************************ 00:17:05.235 21:30:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:05.235 00:17:05.235 real 0m2.614s 00:17:05.235 user 0m0.021s 00:17:05.235 sys 0m0.004s 00:17:05.235 21:30:25 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:17:05.235 21:30:25 -- common/autotest_common.sh@10 -- # set +x 00:17:05.235 21:30:25 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:17:05.235 21:30:25 -- scheduler/scheduler.sh@46 -- # killprocess 66943 00:17:05.235 21:30:25 -- common/autotest_common.sh@926 -- # '[' -z 66943 ']' 00:17:05.235 21:30:25 -- common/autotest_common.sh@930 -- # kill -0 66943 00:17:05.235 21:30:25 -- common/autotest_common.sh@931 -- # uname 00:17:05.235 21:30:25 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:05.235 21:30:25 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 66943 00:17:05.235 killing process with pid 66943 00:17:05.235 21:30:25 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:17:05.235 21:30:25 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:17:05.235 21:30:25 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 66943' 00:17:05.235 21:30:25 -- common/autotest_common.sh@945 -- # kill 66943 00:17:05.235 21:30:25 -- common/autotest_common.sh@950 -- # wait 66943 00:17:05.493 [2024-07-11 21:30:26.316396] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:17:05.752 00:17:05.752 real 0m4.559s 00:17:05.752 user 0m8.642s 00:17:05.752 sys 0m0.356s 00:17:05.752 21:30:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:05.752 ************************************ 00:17:05.752 END TEST event_scheduler 00:17:05.752 ************************************ 00:17:05.752 21:30:26 -- common/autotest_common.sh@10 -- # set +x 00:17:05.752 21:30:26 -- event/event.sh@51 -- # modprobe -n nbd 00:17:05.752 21:30:26 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:17:05.752 21:30:26 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:17:05.752 21:30:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:05.752 21:30:26 -- common/autotest_common.sh@10 -- # set +x 00:17:05.752 ************************************ 00:17:05.752 START TEST app_repeat 00:17:05.752 ************************************ 00:17:05.752 21:30:26 -- common/autotest_common.sh@1104 -- # app_repeat_test 00:17:05.752 21:30:26 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:05.752 21:30:26 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:05.752 21:30:26 -- event/event.sh@13 -- # local nbd_list 00:17:05.752 21:30:26 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:17:05.752 21:30:26 -- event/event.sh@14 -- # local bdev_list 00:17:05.752 21:30:26 -- event/event.sh@15 -- # local repeat_times=4 00:17:05.752 21:30:26 -- event/event.sh@17 -- # modprobe nbd 00:17:05.752 Process app_repeat pid: 67043 00:17:05.752 21:30:26 -- event/event.sh@19 -- # repeat_pid=67043 00:17:05.752 21:30:26 -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:17:05.752 21:30:26 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:17:05.752 21:30:26 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 67043' 00:17:05.752 21:30:26 -- event/event.sh@23 -- # for i in {0..2} 00:17:05.752 spdk_app_start Round 0 00:17:05.752 21:30:26 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:17:05.752 21:30:26 -- event/event.sh@25 -- # waitforlisten 67043 /var/tmp/spdk-nbd.sock 00:17:05.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
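The scheduler_create_thread run earlier in this trace drives SPDK's test-only scheduler RPC plugin entirely through rpc_cmd. Collected into a plain script, the same calls would look roughly like the sketch below; it is not the actual scheduler.sh helper, and it assumes the scheduler test app from test/event/scheduler is already listening on the default RPC socket with scheduler_plugin on PYTHONPATH, as the harness arranges.

```bash
#!/usr/bin/env bash
# Hypothetical standalone replay of the scheduler_thread_create sequence
# traced above (a sketch, not the scheduler.sh test script itself).
set -euo pipefail

rootdir=/home/vagrant/spdk_repo/spdk
rpc="$rootdir/scripts/rpc.py --plugin scheduler_plugin"

# Four busy threads, one pinned to each of the first four cores (100% active).
for mask in 0x1 0x2 0x4 0x8; do
    $rpc scheduler_thread_create -n active_pinned -m "$mask" -a 100
done

# Four idle threads pinned to the same cores (0% active).
for mask in 0x1 0x2 0x4 0x8; do
    $rpc scheduler_thread_create -n idle_pinned -m "$mask" -a 0
done

# Unpinned threads with intermediate loads; adjust one at runtime.
$rpc scheduler_thread_create -n one_third_active -a 30
tid=$($rpc scheduler_thread_create -n half_active -a 0)
$rpc scheduler_thread_set_active "$tid" 50

# Create and immediately delete a thread to cover the removal path.
tid=$($rpc scheduler_thread_create -n deleted -a 100)
$rpc scheduler_thread_delete "$tid"
```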
00:17:05.752 21:30:26 -- common/autotest_common.sh@819 -- # '[' -z 67043 ']' 00:17:05.752 21:30:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:17:05.752 21:30:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:05.752 21:30:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:17:05.752 21:30:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:05.752 21:30:26 -- common/autotest_common.sh@10 -- # set +x 00:17:05.752 [2024-07-11 21:30:26.618776] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:17:05.752 [2024-07-11 21:30:26.618888] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67043 ] 00:17:06.022 [2024-07-11 21:30:26.752597] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:06.022 [2024-07-11 21:30:26.842760] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:06.022 [2024-07-11 21:30:26.842769] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:06.994 21:30:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:06.994 21:30:27 -- common/autotest_common.sh@852 -- # return 0 00:17:06.994 21:30:27 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:17:06.994 Malloc0 00:17:06.994 21:30:27 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:17:07.252 Malloc1 00:17:07.252 21:30:28 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:17:07.252 21:30:28 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:07.252 21:30:28 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:17:07.252 21:30:28 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:17:07.252 21:30:28 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:07.252 21:30:28 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:17:07.252 21:30:28 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:17:07.252 21:30:28 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:07.252 21:30:28 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:17:07.252 21:30:28 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:07.252 21:30:28 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:07.252 21:30:28 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:07.252 21:30:28 -- bdev/nbd_common.sh@12 -- # local i 00:17:07.252 21:30:28 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:07.252 21:30:28 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:07.252 21:30:28 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:17:07.819 /dev/nbd0 00:17:07.819 21:30:28 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:07.819 21:30:28 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:07.819 21:30:28 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:17:07.819 21:30:28 -- common/autotest_common.sh@857 -- # local i 00:17:07.819 21:30:28 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:17:07.819 
21:30:28 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:17:07.819 21:30:28 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:17:07.819 21:30:28 -- common/autotest_common.sh@861 -- # break 00:17:07.819 21:30:28 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:17:07.819 21:30:28 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:17:07.819 21:30:28 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:17:07.819 1+0 records in 00:17:07.819 1+0 records out 00:17:07.819 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000553857 s, 7.4 MB/s 00:17:07.819 21:30:28 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:17:07.819 21:30:28 -- common/autotest_common.sh@874 -- # size=4096 00:17:07.819 21:30:28 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:17:07.819 21:30:28 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:17:07.819 21:30:28 -- common/autotest_common.sh@877 -- # return 0 00:17:07.819 21:30:28 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:07.819 21:30:28 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:07.819 21:30:28 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:17:07.819 /dev/nbd1 00:17:07.819 21:30:28 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:07.819 21:30:28 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:07.819 21:30:28 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:17:07.819 21:30:28 -- common/autotest_common.sh@857 -- # local i 00:17:07.819 21:30:28 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:17:07.819 21:30:28 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:17:07.819 21:30:28 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:17:08.079 21:30:28 -- common/autotest_common.sh@861 -- # break 00:17:08.079 21:30:28 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:17:08.079 21:30:28 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:17:08.079 21:30:28 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:17:08.079 1+0 records in 00:17:08.079 1+0 records out 00:17:08.079 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000439184 s, 9.3 MB/s 00:17:08.079 21:30:28 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:17:08.079 21:30:28 -- common/autotest_common.sh@874 -- # size=4096 00:17:08.079 21:30:28 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:17:08.079 21:30:28 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:17:08.079 21:30:28 -- common/autotest_common.sh@877 -- # return 0 00:17:08.079 21:30:28 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:08.079 21:30:28 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:08.079 21:30:28 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:17:08.079 21:30:28 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:08.079 21:30:28 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:17:08.338 21:30:29 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:17:08.338 { 00:17:08.338 "nbd_device": "/dev/nbd0", 00:17:08.338 "bdev_name": "Malloc0" 00:17:08.338 }, 00:17:08.338 { 00:17:08.338 "nbd_device": 
"/dev/nbd1", 00:17:08.338 "bdev_name": "Malloc1" 00:17:08.338 } 00:17:08.338 ]' 00:17:08.338 21:30:29 -- bdev/nbd_common.sh@64 -- # echo '[ 00:17:08.338 { 00:17:08.338 "nbd_device": "/dev/nbd0", 00:17:08.338 "bdev_name": "Malloc0" 00:17:08.338 }, 00:17:08.338 { 00:17:08.338 "nbd_device": "/dev/nbd1", 00:17:08.338 "bdev_name": "Malloc1" 00:17:08.338 } 00:17:08.338 ]' 00:17:08.338 21:30:29 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:17:08.338 21:30:29 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:17:08.338 /dev/nbd1' 00:17:08.338 21:30:29 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:17:08.338 /dev/nbd1' 00:17:08.338 21:30:29 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:17:08.338 21:30:29 -- bdev/nbd_common.sh@65 -- # count=2 00:17:08.338 21:30:29 -- bdev/nbd_common.sh@66 -- # echo 2 00:17:08.338 21:30:29 -- bdev/nbd_common.sh@95 -- # count=2 00:17:08.338 21:30:29 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:17:08.338 21:30:29 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:17:08.338 21:30:29 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:08.338 21:30:29 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:17:08.338 21:30:29 -- bdev/nbd_common.sh@71 -- # local operation=write 00:17:08.338 21:30:29 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:17:08.338 21:30:29 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:17:08.338 21:30:29 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:17:08.338 256+0 records in 00:17:08.338 256+0 records out 00:17:08.338 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0107855 s, 97.2 MB/s 00:17:08.338 21:30:29 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:08.338 21:30:29 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:17:08.338 256+0 records in 00:17:08.338 256+0 records out 00:17:08.338 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0320752 s, 32.7 MB/s 00:17:08.338 21:30:29 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:08.338 21:30:29 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:17:08.338 256+0 records in 00:17:08.338 256+0 records out 00:17:08.338 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0351446 s, 29.8 MB/s 00:17:08.338 21:30:29 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:17:08.338 21:30:29 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:08.338 21:30:29 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:17:08.338 21:30:29 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:17:08.338 21:30:29 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:17:08.338 21:30:29 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:17:08.338 21:30:29 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:17:08.338 21:30:29 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:08.338 21:30:29 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:17:08.338 21:30:29 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:08.338 21:30:29 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:17:08.338 21:30:29 -- bdev/nbd_common.sh@85 
-- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:17:08.338 21:30:29 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:17:08.338 21:30:29 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:08.338 21:30:29 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:08.338 21:30:29 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:08.338 21:30:29 -- bdev/nbd_common.sh@51 -- # local i 00:17:08.338 21:30:29 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:08.338 21:30:29 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:17:08.597 21:30:29 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:08.597 21:30:29 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:08.597 21:30:29 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:08.597 21:30:29 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:08.597 21:30:29 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:08.597 21:30:29 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:08.597 21:30:29 -- bdev/nbd_common.sh@41 -- # break 00:17:08.597 21:30:29 -- bdev/nbd_common.sh@45 -- # return 0 00:17:08.597 21:30:29 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:08.597 21:30:29 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:17:08.857 21:30:29 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:08.857 21:30:29 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:08.857 21:30:29 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:08.857 21:30:29 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:08.857 21:30:29 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:08.857 21:30:29 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:08.857 21:30:29 -- bdev/nbd_common.sh@41 -- # break 00:17:08.857 21:30:29 -- bdev/nbd_common.sh@45 -- # return 0 00:17:08.857 21:30:29 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:17:08.857 21:30:29 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:08.857 21:30:29 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:17:09.424 21:30:30 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:17:09.424 21:30:30 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:17:09.424 21:30:30 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:17:09.424 21:30:30 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:17:09.424 21:30:30 -- bdev/nbd_common.sh@65 -- # echo '' 00:17:09.424 21:30:30 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:17:09.424 21:30:30 -- bdev/nbd_common.sh@65 -- # true 00:17:09.424 21:30:30 -- bdev/nbd_common.sh@65 -- # count=0 00:17:09.424 21:30:30 -- bdev/nbd_common.sh@66 -- # echo 0 00:17:09.424 21:30:30 -- bdev/nbd_common.sh@104 -- # count=0 00:17:09.424 21:30:30 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:17:09.424 21:30:30 -- bdev/nbd_common.sh@109 -- # return 0 00:17:09.424 21:30:30 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:17:09.683 21:30:30 -- event/event.sh@35 -- # sleep 3 00:17:09.683 [2024-07-11 21:30:30.609967] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:09.941 [2024-07-11 21:30:30.707272] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:09.941 
[2024-07-11 21:30:30.707282] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:09.941 [2024-07-11 21:30:30.762925] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:17:09.941 [2024-07-11 21:30:30.762998] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:17:13.225 spdk_app_start Round 1 00:17:13.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:17:13.225 21:30:33 -- event/event.sh@23 -- # for i in {0..2} 00:17:13.225 21:30:33 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:17:13.225 21:30:33 -- event/event.sh@25 -- # waitforlisten 67043 /var/tmp/spdk-nbd.sock 00:17:13.225 21:30:33 -- common/autotest_common.sh@819 -- # '[' -z 67043 ']' 00:17:13.225 21:30:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:17:13.225 21:30:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:13.225 21:30:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:17:13.225 21:30:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:13.225 21:30:33 -- common/autotest_common.sh@10 -- # set +x 00:17:13.225 21:30:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:13.225 21:30:33 -- common/autotest_common.sh@852 -- # return 0 00:17:13.225 21:30:33 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:17:13.225 Malloc0 00:17:13.225 21:30:33 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:17:13.225 Malloc1 00:17:13.484 21:30:34 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:17:13.484 21:30:34 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:13.484 21:30:34 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:17:13.484 21:30:34 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:17:13.484 21:30:34 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:13.485 21:30:34 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:17:13.485 21:30:34 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:17:13.485 21:30:34 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:13.485 21:30:34 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:17:13.485 21:30:34 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:13.485 21:30:34 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:13.485 21:30:34 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:13.485 21:30:34 -- bdev/nbd_common.sh@12 -- # local i 00:17:13.485 21:30:34 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:13.485 21:30:34 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:13.485 21:30:34 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:17:13.743 /dev/nbd0 00:17:13.743 21:30:34 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:13.743 21:30:34 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:13.743 21:30:34 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:17:13.743 21:30:34 -- common/autotest_common.sh@857 -- # local i 00:17:13.743 21:30:34 -- common/autotest_common.sh@859 
-- # (( i = 1 )) 00:17:13.743 21:30:34 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:17:13.743 21:30:34 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:17:13.743 21:30:34 -- common/autotest_common.sh@861 -- # break 00:17:13.743 21:30:34 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:17:13.743 21:30:34 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:17:13.743 21:30:34 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:17:13.743 1+0 records in 00:17:13.744 1+0 records out 00:17:13.744 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000477034 s, 8.6 MB/s 00:17:13.744 21:30:34 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:17:13.744 21:30:34 -- common/autotest_common.sh@874 -- # size=4096 00:17:13.744 21:30:34 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:17:13.744 21:30:34 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:17:13.744 21:30:34 -- common/autotest_common.sh@877 -- # return 0 00:17:13.744 21:30:34 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:13.744 21:30:34 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:13.744 21:30:34 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:17:14.002 /dev/nbd1 00:17:14.002 21:30:34 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:14.002 21:30:34 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:14.002 21:30:34 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:17:14.002 21:30:34 -- common/autotest_common.sh@857 -- # local i 00:17:14.002 21:30:34 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:17:14.002 21:30:34 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:17:14.002 21:30:34 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:17:14.002 21:30:34 -- common/autotest_common.sh@861 -- # break 00:17:14.002 21:30:34 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:17:14.002 21:30:34 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:17:14.002 21:30:34 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:17:14.002 1+0 records in 00:17:14.002 1+0 records out 00:17:14.002 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000356269 s, 11.5 MB/s 00:17:14.002 21:30:34 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:17:14.002 21:30:34 -- common/autotest_common.sh@874 -- # size=4096 00:17:14.002 21:30:34 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:17:14.002 21:30:34 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:17:14.002 21:30:34 -- common/autotest_common.sh@877 -- # return 0 00:17:14.002 21:30:34 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:14.002 21:30:34 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:14.002 21:30:34 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:17:14.002 21:30:34 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:14.002 21:30:34 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:17:14.261 21:30:35 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:17:14.261 { 00:17:14.261 "nbd_device": "/dev/nbd0", 00:17:14.261 "bdev_name": "Malloc0" 00:17:14.261 }, 00:17:14.261 { 
00:17:14.261 "nbd_device": "/dev/nbd1", 00:17:14.261 "bdev_name": "Malloc1" 00:17:14.261 } 00:17:14.261 ]' 00:17:14.261 21:30:35 -- bdev/nbd_common.sh@64 -- # echo '[ 00:17:14.261 { 00:17:14.261 "nbd_device": "/dev/nbd0", 00:17:14.261 "bdev_name": "Malloc0" 00:17:14.261 }, 00:17:14.261 { 00:17:14.261 "nbd_device": "/dev/nbd1", 00:17:14.261 "bdev_name": "Malloc1" 00:17:14.261 } 00:17:14.261 ]' 00:17:14.261 21:30:35 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:17:14.261 21:30:35 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:17:14.261 /dev/nbd1' 00:17:14.261 21:30:35 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:17:14.261 /dev/nbd1' 00:17:14.261 21:30:35 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:17:14.261 21:30:35 -- bdev/nbd_common.sh@65 -- # count=2 00:17:14.261 21:30:35 -- bdev/nbd_common.sh@66 -- # echo 2 00:17:14.261 21:30:35 -- bdev/nbd_common.sh@95 -- # count=2 00:17:14.261 21:30:35 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:17:14.261 21:30:35 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:17:14.261 21:30:35 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:14.261 21:30:35 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:17:14.261 21:30:35 -- bdev/nbd_common.sh@71 -- # local operation=write 00:17:14.261 21:30:35 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:17:14.261 21:30:35 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:17:14.261 21:30:35 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:17:14.261 256+0 records in 00:17:14.261 256+0 records out 00:17:14.261 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0083858 s, 125 MB/s 00:17:14.261 21:30:35 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:14.261 21:30:35 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:17:14.261 256+0 records in 00:17:14.261 256+0 records out 00:17:14.261 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.021778 s, 48.1 MB/s 00:17:14.261 21:30:35 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:14.261 21:30:35 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:17:14.524 256+0 records in 00:17:14.524 256+0 records out 00:17:14.524 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0222663 s, 47.1 MB/s 00:17:14.524 21:30:35 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:17:14.524 21:30:35 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:14.524 21:30:35 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:17:14.524 21:30:35 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:17:14.524 21:30:35 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:17:14.524 21:30:35 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:17:14.524 21:30:35 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:17:14.524 21:30:35 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:14.524 21:30:35 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:17:14.524 21:30:35 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:14.524 21:30:35 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:17:14.524 21:30:35 
-- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:17:14.524 21:30:35 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:17:14.524 21:30:35 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:14.524 21:30:35 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:14.524 21:30:35 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:14.524 21:30:35 -- bdev/nbd_common.sh@51 -- # local i 00:17:14.524 21:30:35 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:14.524 21:30:35 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:17:14.792 21:30:35 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:14.792 21:30:35 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:14.792 21:30:35 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:14.792 21:30:35 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:14.792 21:30:35 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:14.792 21:30:35 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:14.792 21:30:35 -- bdev/nbd_common.sh@41 -- # break 00:17:14.792 21:30:35 -- bdev/nbd_common.sh@45 -- # return 0 00:17:14.792 21:30:35 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:14.792 21:30:35 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:17:15.051 21:30:35 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:15.051 21:30:35 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:15.051 21:30:35 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:15.051 21:30:35 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:15.051 21:30:35 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:15.051 21:30:35 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:15.051 21:30:35 -- bdev/nbd_common.sh@41 -- # break 00:17:15.051 21:30:35 -- bdev/nbd_common.sh@45 -- # return 0 00:17:15.051 21:30:35 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:17:15.051 21:30:35 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:15.051 21:30:35 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:17:15.310 21:30:36 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:17:15.310 21:30:36 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:17:15.310 21:30:36 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:17:15.310 21:30:36 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:17:15.310 21:30:36 -- bdev/nbd_common.sh@65 -- # echo '' 00:17:15.310 21:30:36 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:17:15.310 21:30:36 -- bdev/nbd_common.sh@65 -- # true 00:17:15.310 21:30:36 -- bdev/nbd_common.sh@65 -- # count=0 00:17:15.310 21:30:36 -- bdev/nbd_common.sh@66 -- # echo 0 00:17:15.310 21:30:36 -- bdev/nbd_common.sh@104 -- # count=0 00:17:15.310 21:30:36 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:17:15.310 21:30:36 -- bdev/nbd_common.sh@109 -- # return 0 00:17:15.310 21:30:36 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:17:15.568 21:30:36 -- event/event.sh@35 -- # sleep 3 00:17:15.568 [2024-07-11 21:30:36.506668] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:15.827 [2024-07-11 21:30:36.590316] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on 
core 1 00:17:15.827 [2024-07-11 21:30:36.590329] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:15.827 [2024-07-11 21:30:36.645843] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:17:15.827 [2024-07-11 21:30:36.645920] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:17:19.111 spdk_app_start Round 2 00:17:19.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:17:19.111 21:30:39 -- event/event.sh@23 -- # for i in {0..2} 00:17:19.111 21:30:39 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:17:19.111 21:30:39 -- event/event.sh@25 -- # waitforlisten 67043 /var/tmp/spdk-nbd.sock 00:17:19.111 21:30:39 -- common/autotest_common.sh@819 -- # '[' -z 67043 ']' 00:17:19.111 21:30:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:17:19.111 21:30:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:19.111 21:30:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:17:19.111 21:30:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:19.111 21:30:39 -- common/autotest_common.sh@10 -- # set +x 00:17:19.111 21:30:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:19.111 21:30:39 -- common/autotest_common.sh@852 -- # return 0 00:17:19.111 21:30:39 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:17:19.111 Malloc0 00:17:19.111 21:30:39 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:17:19.111 Malloc1 00:17:19.369 21:30:40 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:17:19.369 21:30:40 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:19.369 21:30:40 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:17:19.369 21:30:40 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:17:19.369 21:30:40 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:19.369 21:30:40 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:17:19.369 21:30:40 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:17:19.369 21:30:40 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:19.369 21:30:40 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:17:19.369 21:30:40 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:19.369 21:30:40 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:19.369 21:30:40 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:19.369 21:30:40 -- bdev/nbd_common.sh@12 -- # local i 00:17:19.369 21:30:40 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:19.369 21:30:40 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:19.369 21:30:40 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:17:19.627 /dev/nbd0 00:17:19.627 21:30:40 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:19.627 21:30:40 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:19.627 21:30:40 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:17:19.627 21:30:40 -- common/autotest_common.sh@857 -- # local i 00:17:19.627 21:30:40 -- 
common/autotest_common.sh@859 -- # (( i = 1 )) 00:17:19.627 21:30:40 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:17:19.627 21:30:40 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:17:19.627 21:30:40 -- common/autotest_common.sh@861 -- # break 00:17:19.627 21:30:40 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:17:19.627 21:30:40 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:17:19.627 21:30:40 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:17:19.627 1+0 records in 00:17:19.627 1+0 records out 00:17:19.627 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000465785 s, 8.8 MB/s 00:17:19.627 21:30:40 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:17:19.627 21:30:40 -- common/autotest_common.sh@874 -- # size=4096 00:17:19.627 21:30:40 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:17:19.627 21:30:40 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:17:19.627 21:30:40 -- common/autotest_common.sh@877 -- # return 0 00:17:19.627 21:30:40 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:19.627 21:30:40 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:19.627 21:30:40 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:17:19.885 /dev/nbd1 00:17:19.885 21:30:40 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:19.885 21:30:40 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:19.885 21:30:40 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:17:19.885 21:30:40 -- common/autotest_common.sh@857 -- # local i 00:17:19.885 21:30:40 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:17:19.885 21:30:40 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:17:19.885 21:30:40 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:17:19.885 21:30:40 -- common/autotest_common.sh@861 -- # break 00:17:19.885 21:30:40 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:17:19.885 21:30:40 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:17:19.885 21:30:40 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:17:19.885 1+0 records in 00:17:19.886 1+0 records out 00:17:19.886 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000317209 s, 12.9 MB/s 00:17:19.886 21:30:40 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:17:19.886 21:30:40 -- common/autotest_common.sh@874 -- # size=4096 00:17:19.886 21:30:40 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:17:19.886 21:30:40 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:17:19.886 21:30:40 -- common/autotest_common.sh@877 -- # return 0 00:17:19.886 21:30:40 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:19.886 21:30:40 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:19.886 21:30:40 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:17:19.886 21:30:40 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:19.886 21:30:40 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:17:20.144 21:30:40 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:17:20.144 { 00:17:20.144 "nbd_device": "/dev/nbd0", 00:17:20.144 "bdev_name": "Malloc0" 
00:17:20.144 }, 00:17:20.144 { 00:17:20.144 "nbd_device": "/dev/nbd1", 00:17:20.144 "bdev_name": "Malloc1" 00:17:20.144 } 00:17:20.144 ]' 00:17:20.144 21:30:40 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:17:20.144 21:30:40 -- bdev/nbd_common.sh@64 -- # echo '[ 00:17:20.144 { 00:17:20.144 "nbd_device": "/dev/nbd0", 00:17:20.144 "bdev_name": "Malloc0" 00:17:20.144 }, 00:17:20.144 { 00:17:20.144 "nbd_device": "/dev/nbd1", 00:17:20.144 "bdev_name": "Malloc1" 00:17:20.144 } 00:17:20.144 ]' 00:17:20.144 21:30:40 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:17:20.144 /dev/nbd1' 00:17:20.144 21:30:40 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:17:20.144 /dev/nbd1' 00:17:20.144 21:30:40 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:17:20.144 21:30:40 -- bdev/nbd_common.sh@65 -- # count=2 00:17:20.144 21:30:40 -- bdev/nbd_common.sh@66 -- # echo 2 00:17:20.144 21:30:40 -- bdev/nbd_common.sh@95 -- # count=2 00:17:20.144 21:30:40 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:17:20.144 21:30:40 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:17:20.144 21:30:40 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:20.144 21:30:40 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:17:20.144 21:30:40 -- bdev/nbd_common.sh@71 -- # local operation=write 00:17:20.144 21:30:40 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:17:20.144 21:30:40 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:17:20.144 21:30:40 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:17:20.144 256+0 records in 00:17:20.144 256+0 records out 00:17:20.144 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0104904 s, 100 MB/s 00:17:20.144 21:30:40 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:20.144 21:30:40 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:17:20.144 256+0 records in 00:17:20.144 256+0 records out 00:17:20.144 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0262708 s, 39.9 MB/s 00:17:20.144 21:30:41 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:20.144 21:30:41 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:17:20.144 256+0 records in 00:17:20.144 256+0 records out 00:17:20.144 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0257809 s, 40.7 MB/s 00:17:20.144 21:30:41 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:17:20.144 21:30:41 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:20.144 21:30:41 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:17:20.144 21:30:41 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:17:20.144 21:30:41 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:17:20.144 21:30:41 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:17:20.144 21:30:41 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:17:20.144 21:30:41 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:20.144 21:30:41 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:17:20.144 21:30:41 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:20.144 21:30:41 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
/dev/nbd1 00:17:20.144 21:30:41 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:17:20.144 21:30:41 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:17:20.144 21:30:41 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:20.144 21:30:41 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:20.144 21:30:41 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:20.144 21:30:41 -- bdev/nbd_common.sh@51 -- # local i 00:17:20.144 21:30:41 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:20.144 21:30:41 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:17:20.403 21:30:41 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:20.403 21:30:41 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:20.403 21:30:41 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:20.403 21:30:41 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:20.403 21:30:41 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:20.403 21:30:41 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:20.403 21:30:41 -- bdev/nbd_common.sh@41 -- # break 00:17:20.403 21:30:41 -- bdev/nbd_common.sh@45 -- # return 0 00:17:20.403 21:30:41 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:20.403 21:30:41 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:17:20.969 21:30:41 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:20.969 21:30:41 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:20.969 21:30:41 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:20.969 21:30:41 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:20.969 21:30:41 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:20.969 21:30:41 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:20.969 21:30:41 -- bdev/nbd_common.sh@41 -- # break 00:17:20.969 21:30:41 -- bdev/nbd_common.sh@45 -- # return 0 00:17:20.969 21:30:41 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:17:20.969 21:30:41 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:20.969 21:30:41 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:17:20.969 21:30:41 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:17:20.969 21:30:41 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:17:20.969 21:30:41 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:17:21.228 21:30:41 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:17:21.228 21:30:41 -- bdev/nbd_common.sh@65 -- # echo '' 00:17:21.228 21:30:41 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:17:21.228 21:30:41 -- bdev/nbd_common.sh@65 -- # true 00:17:21.228 21:30:41 -- bdev/nbd_common.sh@65 -- # count=0 00:17:21.228 21:30:41 -- bdev/nbd_common.sh@66 -- # echo 0 00:17:21.228 21:30:41 -- bdev/nbd_common.sh@104 -- # count=0 00:17:21.228 21:30:41 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:17:21.228 21:30:41 -- bdev/nbd_common.sh@109 -- # return 0 00:17:21.228 21:30:41 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:17:21.486 21:30:42 -- event/event.sh@35 -- # sleep 3 00:17:21.486 [2024-07-11 21:30:42.427282] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:21.745 [2024-07-11 21:30:42.500160] reactor.c: 937:reactor_run: 
*NOTICE*: Reactor started on core 1 00:17:21.745 [2024-07-11 21:30:42.500174] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:21.745 [2024-07-11 21:30:42.557264] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:17:21.745 [2024-07-11 21:30:42.557314] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:17:25.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:17:25.034 21:30:45 -- event/event.sh@38 -- # waitforlisten 67043 /var/tmp/spdk-nbd.sock 00:17:25.034 21:30:45 -- common/autotest_common.sh@819 -- # '[' -z 67043 ']' 00:17:25.034 21:30:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:17:25.034 21:30:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:25.034 21:30:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:17:25.034 21:30:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:25.034 21:30:45 -- common/autotest_common.sh@10 -- # set +x 00:17:25.034 21:30:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:25.034 21:30:45 -- common/autotest_common.sh@852 -- # return 0 00:17:25.034 21:30:45 -- event/event.sh@39 -- # killprocess 67043 00:17:25.034 21:30:45 -- common/autotest_common.sh@926 -- # '[' -z 67043 ']' 00:17:25.034 21:30:45 -- common/autotest_common.sh@930 -- # kill -0 67043 00:17:25.034 21:30:45 -- common/autotest_common.sh@931 -- # uname 00:17:25.034 21:30:45 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:25.034 21:30:45 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 67043 00:17:25.034 21:30:45 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:25.034 21:30:45 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:25.034 21:30:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 67043' 00:17:25.034 killing process with pid 67043 00:17:25.034 21:30:45 -- common/autotest_common.sh@945 -- # kill 67043 00:17:25.034 21:30:45 -- common/autotest_common.sh@950 -- # wait 67043 00:17:25.034 spdk_app_start is called in Round 0. 00:17:25.034 Shutdown signal received, stop current app iteration 00:17:25.034 Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 reinitialization... 00:17:25.034 spdk_app_start is called in Round 1. 00:17:25.034 Shutdown signal received, stop current app iteration 00:17:25.034 Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 reinitialization... 00:17:25.034 spdk_app_start is called in Round 2. 00:17:25.034 Shutdown signal received, stop current app iteration 00:17:25.034 Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 reinitialization... 00:17:25.034 spdk_app_start is called in Round 3. 
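Each app_repeat round above repeats the same nbd data-verify pattern: write 1 MiB of random data through the bdev-backed nbd devices, then compare every device byte-for-byte against the source file. A minimal standalone sketch of that pattern follows; it assumes /dev/nbd0 and /dev/nbd1 have already been exported by the running app (nbd_start_disk), root privileges for raw device I/O, and an illustrative temp-file path.

```bash
#!/usr/bin/env bash
# Sketch of the dd write + cmp verify step traced above (devices assumed
# to be exported already; run as root).
set -euo pipefail

nbd_list=(/dev/nbd0 /dev/nbd1)
tmp_file=$(mktemp /tmp/nbdrandtest.XXXXXX)   # illustrative path

# Generate a 1 MiB random pattern once, ...
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256

# ...write it to every nbd device with O_DIRECT, ...
for nbd in "${nbd_list[@]}"; do
    dd if="$tmp_file" of="$nbd" bs=4096 count=256 oflag=direct
done

# ...then read each device back and require an exact match.
for nbd in "${nbd_list[@]}"; do
    cmp -b -n 1M "$tmp_file" "$nbd"
done

rm -f "$tmp_file"
echo "nbd data verify OK"
```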
00:17:25.034 Shutdown signal received, stop current app iteration 00:17:25.034 ************************************ 00:17:25.034 END TEST app_repeat 00:17:25.034 ************************************ 00:17:25.034 21:30:45 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:17:25.034 21:30:45 -- event/event.sh@42 -- # return 0 00:17:25.034 00:17:25.034 real 0m19.127s 00:17:25.034 user 0m42.805s 00:17:25.034 sys 0m3.046s 00:17:25.034 21:30:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:25.034 21:30:45 -- common/autotest_common.sh@10 -- # set +x 00:17:25.034 21:30:45 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:17:25.034 21:30:45 -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:17:25.034 21:30:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:17:25.034 21:30:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:25.034 21:30:45 -- common/autotest_common.sh@10 -- # set +x 00:17:25.034 ************************************ 00:17:25.034 START TEST cpu_locks 00:17:25.034 ************************************ 00:17:25.034 21:30:45 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:17:25.034 * Looking for test storage... 00:17:25.034 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:17:25.034 21:30:45 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:17:25.034 21:30:45 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:17:25.034 21:30:45 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:17:25.034 21:30:45 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:17:25.034 21:30:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:17:25.034 21:30:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:25.034 21:30:45 -- common/autotest_common.sh@10 -- # set +x 00:17:25.034 ************************************ 00:17:25.034 START TEST default_locks 00:17:25.034 ************************************ 00:17:25.034 21:30:45 -- common/autotest_common.sh@1104 -- # default_locks 00:17:25.034 21:30:45 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=67480 00:17:25.034 21:30:45 -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:17:25.034 21:30:45 -- event/cpu_locks.sh@47 -- # waitforlisten 67480 00:17:25.034 21:30:45 -- common/autotest_common.sh@819 -- # '[' -z 67480 ']' 00:17:25.034 21:30:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:25.034 21:30:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:25.034 21:30:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:25.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:25.034 21:30:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:25.034 21:30:45 -- common/autotest_common.sh@10 -- # set +x 00:17:25.034 [2024-07-11 21:30:45.911218] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:17:25.034 [2024-07-11 21:30:45.911554] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67480 ] 00:17:25.292 [2024-07-11 21:30:46.052581] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:25.292 [2024-07-11 21:30:46.148143] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:25.292 [2024-07-11 21:30:46.148314] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:26.226 21:30:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:26.226 21:30:46 -- common/autotest_common.sh@852 -- # return 0 00:17:26.226 21:30:46 -- event/cpu_locks.sh@49 -- # locks_exist 67480 00:17:26.226 21:30:46 -- event/cpu_locks.sh@22 -- # lslocks -p 67480 00:17:26.226 21:30:46 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:17:26.484 21:30:47 -- event/cpu_locks.sh@50 -- # killprocess 67480 00:17:26.484 21:30:47 -- common/autotest_common.sh@926 -- # '[' -z 67480 ']' 00:17:26.484 21:30:47 -- common/autotest_common.sh@930 -- # kill -0 67480 00:17:26.484 21:30:47 -- common/autotest_common.sh@931 -- # uname 00:17:26.484 21:30:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:26.484 21:30:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 67480 00:17:26.484 killing process with pid 67480 00:17:26.484 21:30:47 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:26.484 21:30:47 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:26.484 21:30:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 67480' 00:17:26.484 21:30:47 -- common/autotest_common.sh@945 -- # kill 67480 00:17:26.484 21:30:47 -- common/autotest_common.sh@950 -- # wait 67480 00:17:26.742 21:30:47 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 67480 00:17:26.742 21:30:47 -- common/autotest_common.sh@640 -- # local es=0 00:17:26.742 21:30:47 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 67480 00:17:26.742 21:30:47 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:17:26.742 21:30:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:26.742 21:30:47 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:17:26.742 21:30:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:26.742 21:30:47 -- common/autotest_common.sh@643 -- # waitforlisten 67480 00:17:26.742 21:30:47 -- common/autotest_common.sh@819 -- # '[' -z 67480 ']' 00:17:26.742 21:30:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:26.742 21:30:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:26.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:26.742 21:30:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
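The default_locks check above boils down to asking the kernel which file locks the freshly started spdk_tgt holds: lslocks -p <pid> filtered for the spdk_cpu_lock prefix, exactly as the trace shows. A small helper equivalent to that sequence might look like the sketch below; everything beyond the lslocks/grep pair seen in the log is an assumption.

```bash
#!/usr/bin/env bash
# Sketch of the core-lock check from the trace: a target started with
# -m 0x1 should hold an advisory lock on its per-core lock file.
set -euo pipefail

locks_exist() {
    local pid=$1
    # lslocks lists the file locks held by the process; SPDK's per-core
    # lock files show up with an spdk_cpu_lock prefix.
    lslocks -p "$pid" | grep -q spdk_cpu_lock
}

pid=${1:?usage: $0 <spdk_tgt pid>}

if locks_exist "$pid"; then
    echo "pid $pid holds its CPU core lock(s)"
else
    echo "pid $pid holds no spdk_cpu_lock entries" >&2
    exit 1
fi
```

Run against the target started above (e.g. pid 67480), the helper exits 0 while the lock is held and 1 once the process is gone.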
00:17:26.742 ERROR: process (pid: 67480) is no longer running 00:17:26.742 21:30:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:26.742 21:30:47 -- common/autotest_common.sh@10 -- # set +x 00:17:26.742 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (67480) - No such process 00:17:26.742 21:30:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:26.742 21:30:47 -- common/autotest_common.sh@852 -- # return 1 00:17:26.742 21:30:47 -- common/autotest_common.sh@643 -- # es=1 00:17:26.742 21:30:47 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:17:26.742 21:30:47 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:17:26.742 21:30:47 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:17:26.742 21:30:47 -- event/cpu_locks.sh@54 -- # no_locks 00:17:26.742 21:30:47 -- event/cpu_locks.sh@26 -- # lock_files=() 00:17:26.742 21:30:47 -- event/cpu_locks.sh@26 -- # local lock_files 00:17:26.742 ************************************ 00:17:26.742 END TEST default_locks 00:17:26.742 ************************************ 00:17:26.742 21:30:47 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:17:26.742 00:17:26.742 real 0m1.791s 00:17:26.742 user 0m1.855s 00:17:26.742 sys 0m0.549s 00:17:26.742 21:30:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:26.742 21:30:47 -- common/autotest_common.sh@10 -- # set +x 00:17:26.742 21:30:47 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:17:26.742 21:30:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:17:26.742 21:30:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:26.742 21:30:47 -- common/autotest_common.sh@10 -- # set +x 00:17:26.742 ************************************ 00:17:26.742 START TEST default_locks_via_rpc 00:17:26.742 ************************************ 00:17:26.742 21:30:47 -- common/autotest_common.sh@1104 -- # default_locks_via_rpc 00:17:26.742 21:30:47 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=67532 00:17:26.742 21:30:47 -- event/cpu_locks.sh@63 -- # waitforlisten 67532 00:17:26.742 21:30:47 -- common/autotest_common.sh@819 -- # '[' -z 67532 ']' 00:17:26.742 21:30:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:26.742 21:30:47 -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:17:26.742 21:30:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:26.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:26.742 21:30:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:26.742 21:30:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:26.742 21:30:47 -- common/autotest_common.sh@10 -- # set +x 00:17:27.000 [2024-07-11 21:30:47.738873] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:17:27.000 [2024-07-11 21:30:47.738980] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67532 ] 00:17:27.000 [2024-07-11 21:30:47.873756] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:27.259 [2024-07-11 21:30:47.964861] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:27.259 [2024-07-11 21:30:47.965082] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:27.911 21:30:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:27.911 21:30:48 -- common/autotest_common.sh@852 -- # return 0 00:17:27.911 21:30:48 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:17:27.911 21:30:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:27.911 21:30:48 -- common/autotest_common.sh@10 -- # set +x 00:17:27.911 21:30:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:27.911 21:30:48 -- event/cpu_locks.sh@67 -- # no_locks 00:17:27.911 21:30:48 -- event/cpu_locks.sh@26 -- # lock_files=() 00:17:27.911 21:30:48 -- event/cpu_locks.sh@26 -- # local lock_files 00:17:27.911 21:30:48 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:17:27.911 21:30:48 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:17:27.911 21:30:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:27.911 21:30:48 -- common/autotest_common.sh@10 -- # set +x 00:17:27.911 21:30:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:27.911 21:30:48 -- event/cpu_locks.sh@71 -- # locks_exist 67532 00:17:27.911 21:30:48 -- event/cpu_locks.sh@22 -- # lslocks -p 67532 00:17:27.911 21:30:48 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:17:28.479 21:30:49 -- event/cpu_locks.sh@73 -- # killprocess 67532 00:17:28.479 21:30:49 -- common/autotest_common.sh@926 -- # '[' -z 67532 ']' 00:17:28.479 21:30:49 -- common/autotest_common.sh@930 -- # kill -0 67532 00:17:28.479 21:30:49 -- common/autotest_common.sh@931 -- # uname 00:17:28.479 21:30:49 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:28.479 21:30:49 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 67532 00:17:28.479 killing process with pid 67532 00:17:28.479 21:30:49 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:28.479 21:30:49 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:28.479 21:30:49 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 67532' 00:17:28.479 21:30:49 -- common/autotest_common.sh@945 -- # kill 67532 00:17:28.479 21:30:49 -- common/autotest_common.sh@950 -- # wait 67532 00:17:28.737 ************************************ 00:17:28.737 END TEST default_locks_via_rpc 00:17:28.737 ************************************ 00:17:28.737 00:17:28.737 real 0m1.958s 00:17:28.737 user 0m2.116s 00:17:28.737 sys 0m0.593s 00:17:28.737 21:30:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:28.737 21:30:49 -- common/autotest_common.sh@10 -- # set +x 00:17:28.737 21:30:49 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:17:28.737 21:30:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:17:28.737 21:30:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:28.737 21:30:49 -- common/autotest_common.sh@10 -- # set +x 00:17:28.995 
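default_locks_via_rpc, which finishes above, exercises the same per-core lock through the RPC surface instead of the command line: framework_disable_cpumask_locks releases the lock files, framework_enable_cpumask_locks re-claims them, and lslocks then finds spdk_cpu_lock held by the target again. rpc_cmd in the trace is the harness wrapper around SPDK's rpc.py client; issued by hand against the same socket it would look roughly like this sketch (the rpc.py path under the repo is assumed, the socket path is taken from the trace).

  # Sketch: toggling the per-core lock files over JSON-RPC (rpc.py location assumed)
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  SOCK=/var/tmp/spdk.sock
  "$RPC" -s "$SOCK" framework_disable_cpumask_locks   # release /var/tmp/spdk_cpu_lock_* locks
  "$RPC" -s "$SOCK" framework_enable_cpumask_locks    # claim them again
  lslocks | grep spdk_cpu_lock                         # the re-claimed lock is visible again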
************************************ 00:17:28.995 START TEST non_locking_app_on_locked_coremask 00:17:28.996 ************************************ 00:17:28.996 21:30:49 -- common/autotest_common.sh@1104 -- # non_locking_app_on_locked_coremask 00:17:28.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:28.996 21:30:49 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=67583 00:17:28.996 21:30:49 -- event/cpu_locks.sh@81 -- # waitforlisten 67583 /var/tmp/spdk.sock 00:17:28.996 21:30:49 -- common/autotest_common.sh@819 -- # '[' -z 67583 ']' 00:17:28.996 21:30:49 -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:17:28.996 21:30:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:28.996 21:30:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:28.996 21:30:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:28.996 21:30:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:28.996 21:30:49 -- common/autotest_common.sh@10 -- # set +x 00:17:28.996 [2024-07-11 21:30:49.755299] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:17:28.996 [2024-07-11 21:30:49.755656] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67583 ] 00:17:28.996 [2024-07-11 21:30:49.895911] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:29.254 [2024-07-11 21:30:49.997422] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:29.254 [2024-07-11 21:30:49.997997] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:29.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:17:29.819 21:30:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:29.819 21:30:50 -- common/autotest_common.sh@852 -- # return 0 00:17:29.819 21:30:50 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=67599 00:17:29.819 21:30:50 -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:17:29.819 21:30:50 -- event/cpu_locks.sh@85 -- # waitforlisten 67599 /var/tmp/spdk2.sock 00:17:29.819 21:30:50 -- common/autotest_common.sh@819 -- # '[' -z 67599 ']' 00:17:29.819 21:30:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:17:29.819 21:30:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:29.819 21:30:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:17:29.819 21:30:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:29.819 21:30:50 -- common/autotest_common.sh@10 -- # set +x 00:17:29.819 [2024-07-11 21:30:50.760552] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:17:29.819 [2024-07-11 21:30:50.760712] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67599 ] 00:17:30.075 [2024-07-11 21:30:50.906122] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:17:30.075 [2024-07-11 21:30:50.906185] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:30.331 [2024-07-11 21:30:51.104429] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:30.331 [2024-07-11 21:30:51.104669] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:30.895 21:30:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:30.895 21:30:51 -- common/autotest_common.sh@852 -- # return 0 00:17:30.895 21:30:51 -- event/cpu_locks.sh@87 -- # locks_exist 67583 00:17:30.895 21:30:51 -- event/cpu_locks.sh@22 -- # lslocks -p 67583 00:17:30.895 21:30:51 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:17:31.856 21:30:52 -- event/cpu_locks.sh@89 -- # killprocess 67583 00:17:31.856 21:30:52 -- common/autotest_common.sh@926 -- # '[' -z 67583 ']' 00:17:31.856 21:30:52 -- common/autotest_common.sh@930 -- # kill -0 67583 00:17:31.856 21:30:52 -- common/autotest_common.sh@931 -- # uname 00:17:31.856 21:30:52 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:31.856 21:30:52 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 67583 00:17:31.856 killing process with pid 67583 00:17:31.856 21:30:52 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:31.856 21:30:52 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:31.856 21:30:52 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 67583' 00:17:31.856 21:30:52 -- common/autotest_common.sh@945 -- # kill 67583 00:17:31.856 21:30:52 -- common/autotest_common.sh@950 -- # wait 67583 00:17:32.787 21:30:53 -- event/cpu_locks.sh@90 -- # killprocess 67599 00:17:32.787 21:30:53 -- common/autotest_common.sh@926 -- # '[' -z 67599 ']' 00:17:32.787 21:30:53 -- common/autotest_common.sh@930 -- # kill -0 67599 00:17:32.787 21:30:53 -- common/autotest_common.sh@931 -- # uname 00:17:32.787 21:30:53 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:32.787 21:30:53 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 67599 00:17:32.787 killing process with pid 67599 00:17:32.787 21:30:53 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:32.787 21:30:53 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:32.787 21:30:53 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 67599' 00:17:32.787 21:30:53 -- common/autotest_common.sh@945 -- # kill 67599 00:17:32.787 21:30:53 -- common/autotest_common.sh@950 -- # wait 67599 00:17:33.045 ************************************ 00:17:33.045 END TEST non_locking_app_on_locked_coremask 00:17:33.045 ************************************ 00:17:33.045 00:17:33.045 real 0m4.157s 00:17:33.045 user 0m4.592s 00:17:33.045 sys 0m1.157s 00:17:33.045 21:30:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:33.045 21:30:53 -- common/autotest_common.sh@10 -- # set +x 00:17:33.045 21:30:53 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:17:33.045 21:30:53 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:17:33.045 21:30:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:33.045 21:30:53 -- common/autotest_common.sh@10 -- # set +x 00:17:33.045 ************************************ 00:17:33.045 START TEST locking_app_on_unlocked_coremask 00:17:33.045 ************************************ 00:17:33.045 21:30:53 -- common/autotest_common.sh@1104 -- # locking_app_on_unlocked_coremask 00:17:33.045 21:30:53 -- 
event/cpu_locks.sh@98 -- # spdk_tgt_pid=67668 00:17:33.045 21:30:53 -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:17:33.045 21:30:53 -- event/cpu_locks.sh@99 -- # waitforlisten 67668 /var/tmp/spdk.sock 00:17:33.045 21:30:53 -- common/autotest_common.sh@819 -- # '[' -z 67668 ']' 00:17:33.045 21:30:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:33.045 21:30:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:33.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:33.045 21:30:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:33.045 21:30:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:33.045 21:30:53 -- common/autotest_common.sh@10 -- # set +x 00:17:33.045 [2024-07-11 21:30:53.955455] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:17:33.045 [2024-07-11 21:30:53.955825] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67668 ] 00:17:33.301 [2024-07-11 21:30:54.092751] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:17:33.301 [2024-07-11 21:30:54.092847] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:33.301 [2024-07-11 21:30:54.191893] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:33.301 [2024-07-11 21:30:54.192071] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:34.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:17:34.231 21:30:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:34.231 21:30:55 -- common/autotest_common.sh@852 -- # return 0 00:17:34.231 21:30:55 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=67684 00:17:34.231 21:30:55 -- event/cpu_locks.sh@103 -- # waitforlisten 67684 /var/tmp/spdk2.sock 00:17:34.231 21:30:55 -- common/autotest_common.sh@819 -- # '[' -z 67684 ']' 00:17:34.231 21:30:55 -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:17:34.231 21:30:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:17:34.231 21:30:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:34.231 21:30:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:17:34.231 21:30:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:34.231 21:30:55 -- common/autotest_common.sh@10 -- # set +x 00:17:34.231 [2024-07-11 21:30:55.068206] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:17:34.231 [2024-07-11 21:30:55.068643] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67684 ] 00:17:34.488 [2024-07-11 21:30:55.212084] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:34.488 [2024-07-11 21:30:55.411106] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:34.488 [2024-07-11 21:30:55.411291] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:35.420 21:30:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:35.420 21:30:56 -- common/autotest_common.sh@852 -- # return 0 00:17:35.420 21:30:56 -- event/cpu_locks.sh@105 -- # locks_exist 67684 00:17:35.420 21:30:56 -- event/cpu_locks.sh@22 -- # lslocks -p 67684 00:17:35.420 21:30:56 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:17:36.353 21:30:57 -- event/cpu_locks.sh@107 -- # killprocess 67668 00:17:36.353 21:30:57 -- common/autotest_common.sh@926 -- # '[' -z 67668 ']' 00:17:36.353 21:30:57 -- common/autotest_common.sh@930 -- # kill -0 67668 00:17:36.353 21:30:57 -- common/autotest_common.sh@931 -- # uname 00:17:36.353 21:30:57 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:36.353 21:30:57 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 67668 00:17:36.353 killing process with pid 67668 00:17:36.353 21:30:57 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:36.353 21:30:57 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:36.353 21:30:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 67668' 00:17:36.353 21:30:57 -- common/autotest_common.sh@945 -- # kill 67668 00:17:36.353 21:30:57 -- common/autotest_common.sh@950 -- # wait 67668 00:17:37.286 21:30:57 -- event/cpu_locks.sh@108 -- # killprocess 67684 00:17:37.286 21:30:57 -- common/autotest_common.sh@926 -- # '[' -z 67684 ']' 00:17:37.286 21:30:57 -- common/autotest_common.sh@930 -- # kill -0 67684 00:17:37.286 21:30:57 -- common/autotest_common.sh@931 -- # uname 00:17:37.286 21:30:57 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:37.286 21:30:57 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 67684 00:17:37.286 killing process with pid 67684 00:17:37.286 21:30:57 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:37.286 21:30:57 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:37.286 21:30:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 67684' 00:17:37.286 21:30:57 -- common/autotest_common.sh@945 -- # kill 67684 00:17:37.286 21:30:57 -- common/autotest_common.sh@950 -- # wait 67684 00:17:37.544 00:17:37.544 real 0m4.390s 00:17:37.544 user 0m4.989s 00:17:37.544 sys 0m1.163s 00:17:37.544 21:30:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:37.544 21:30:58 -- common/autotest_common.sh@10 -- # set +x 00:17:37.544 ************************************ 00:17:37.544 END TEST locking_app_on_unlocked_coremask 00:17:37.544 ************************************ 00:17:37.544 21:30:58 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:17:37.544 21:30:58 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:17:37.544 21:30:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:37.544 21:30:58 -- common/autotest_common.sh@10 -- # set +x 
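The two tests that finish above both run a pair of targets pinned to core 0, differing only in which side skips the lock: in non_locking_app_on_locked_coremask the second instance passes --disable-cpumask-locks, in locking_app_on_unlocked_coremask the first one does, so at most one process ever claims the lock file and both can start. A condensed sketch of the pattern follows; the binary path, core mask, and socket name are taken from the trace, while the sleep-based waiting is a crude stand-in for the harness's waitforlisten.

  # Sketch: two targets sharing core 0, only one of them claiming the per-core lock
  TGT=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
  "$TGT" -m 0x1 &                            # first target claims /var/tmp/spdk_cpu_lock_000
  pid1=$!
  sleep 1                                    # stand-in for waitforlisten on /var/tmp/spdk.sock
  "$TGT" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # same core, lock claim skipped
  pid2=$!
  sleep 1
  lslocks -p "$pid1" | grep spdk_cpu_lock    # only the first target shows the lock
  kill "$pid1" "$pid2"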
00:17:37.544 ************************************ 00:17:37.544 START TEST locking_app_on_locked_coremask 00:17:37.544 ************************************ 00:17:37.544 21:30:58 -- common/autotest_common.sh@1104 -- # locking_app_on_locked_coremask 00:17:37.544 21:30:58 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=67751 00:17:37.544 21:30:58 -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:17:37.544 21:30:58 -- event/cpu_locks.sh@116 -- # waitforlisten 67751 /var/tmp/spdk.sock 00:17:37.544 21:30:58 -- common/autotest_common.sh@819 -- # '[' -z 67751 ']' 00:17:37.544 21:30:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:37.544 21:30:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:37.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:37.544 21:30:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:37.544 21:30:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:37.544 21:30:58 -- common/autotest_common.sh@10 -- # set +x 00:17:37.544 [2024-07-11 21:30:58.399015] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:17:37.544 [2024-07-11 21:30:58.399138] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67751 ] 00:17:37.819 [2024-07-11 21:30:58.540257] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:37.819 [2024-07-11 21:30:58.619075] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:37.819 [2024-07-11 21:30:58.619246] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:38.752 21:30:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:38.752 21:30:59 -- common/autotest_common.sh@852 -- # return 0 00:17:38.752 21:30:59 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=67767 00:17:38.752 21:30:59 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 67767 /var/tmp/spdk2.sock 00:17:38.752 21:30:59 -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:17:38.752 21:30:59 -- common/autotest_common.sh@640 -- # local es=0 00:17:38.752 21:30:59 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 67767 /var/tmp/spdk2.sock 00:17:38.752 21:30:59 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:17:38.752 21:30:59 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:38.752 21:30:59 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:17:38.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:17:38.752 21:30:59 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:38.752 21:30:59 -- common/autotest_common.sh@643 -- # waitforlisten 67767 /var/tmp/spdk2.sock 00:17:38.752 21:30:59 -- common/autotest_common.sh@819 -- # '[' -z 67767 ']' 00:17:38.752 21:30:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:17:38.752 21:30:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:38.752 21:30:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:17:38.752 21:30:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:38.752 21:30:59 -- common/autotest_common.sh@10 -- # set +x 00:17:38.752 [2024-07-11 21:30:59.406327] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:17:38.752 [2024-07-11 21:30:59.406767] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67767 ] 00:17:38.752 [2024-07-11 21:30:59.556022] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 67751 has claimed it. 00:17:38.752 [2024-07-11 21:30:59.556115] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:17:39.318 ERROR: process (pid: 67767) is no longer running 00:17:39.318 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (67767) - No such process 00:17:39.318 21:31:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:39.318 21:31:00 -- common/autotest_common.sh@852 -- # return 1 00:17:39.318 21:31:00 -- common/autotest_common.sh@643 -- # es=1 00:17:39.318 21:31:00 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:17:39.318 21:31:00 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:17:39.318 21:31:00 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:17:39.318 21:31:00 -- event/cpu_locks.sh@122 -- # locks_exist 67751 00:17:39.318 21:31:00 -- event/cpu_locks.sh@22 -- # lslocks -p 67751 00:17:39.318 21:31:00 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:17:39.886 21:31:00 -- event/cpu_locks.sh@124 -- # killprocess 67751 00:17:39.886 21:31:00 -- common/autotest_common.sh@926 -- # '[' -z 67751 ']' 00:17:39.886 21:31:00 -- common/autotest_common.sh@930 -- # kill -0 67751 00:17:39.886 21:31:00 -- common/autotest_common.sh@931 -- # uname 00:17:39.886 21:31:00 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:39.886 21:31:00 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 67751 00:17:39.886 killing process with pid 67751 00:17:39.886 21:31:00 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:39.886 21:31:00 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:39.886 21:31:00 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 67751' 00:17:39.886 21:31:00 -- common/autotest_common.sh@945 -- # kill 67751 00:17:39.886 21:31:00 -- common/autotest_common.sh@950 -- # wait 67751 00:17:40.144 00:17:40.144 real 0m2.587s 00:17:40.144 user 0m2.988s 00:17:40.144 sys 0m0.644s 00:17:40.144 21:31:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:40.144 ************************************ 00:17:40.144 END TEST locking_app_on_locked_coremask 00:17:40.144 ************************************ 00:17:40.144 21:31:00 -- common/autotest_common.sh@10 -- # set +x 00:17:40.144 21:31:00 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:17:40.144 21:31:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:17:40.145 21:31:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:40.145 21:31:00 -- common/autotest_common.sh@10 -- # set +x 00:17:40.145 ************************************ 00:17:40.145 START TEST locking_overlapped_coremask 00:17:40.145 ************************************ 00:17:40.145 21:31:00 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask 00:17:40.145 21:31:00 
-- event/cpu_locks.sh@132 -- # spdk_tgt_pid=67813 00:17:40.145 21:31:00 -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:17:40.145 21:31:00 -- event/cpu_locks.sh@133 -- # waitforlisten 67813 /var/tmp/spdk.sock 00:17:40.145 21:31:00 -- common/autotest_common.sh@819 -- # '[' -z 67813 ']' 00:17:40.145 21:31:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:40.145 21:31:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:40.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:40.145 21:31:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:40.145 21:31:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:40.145 21:31:00 -- common/autotest_common.sh@10 -- # set +x 00:17:40.145 [2024-07-11 21:31:01.035894] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:17:40.145 [2024-07-11 21:31:01.036232] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67813 ] 00:17:40.403 [2024-07-11 21:31:01.178429] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:40.403 [2024-07-11 21:31:01.276897] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:40.403 [2024-07-11 21:31:01.277461] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:40.403 [2024-07-11 21:31:01.277583] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:40.403 [2024-07-11 21:31:01.277590] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:41.336 21:31:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:41.336 21:31:02 -- common/autotest_common.sh@852 -- # return 0 00:17:41.336 21:31:02 -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:17:41.336 21:31:02 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=67832 00:17:41.336 21:31:02 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 67832 /var/tmp/spdk2.sock 00:17:41.336 21:31:02 -- common/autotest_common.sh@640 -- # local es=0 00:17:41.336 21:31:02 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 67832 /var/tmp/spdk2.sock 00:17:41.336 21:31:02 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:17:41.336 21:31:02 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:41.336 21:31:02 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:17:41.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:17:41.336 21:31:02 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:41.336 21:31:02 -- common/autotest_common.sh@643 -- # waitforlisten 67832 /var/tmp/spdk2.sock 00:17:41.336 21:31:02 -- common/autotest_common.sh@819 -- # '[' -z 67832 ']' 00:17:41.336 21:31:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:17:41.336 21:31:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:41.336 21:31:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:17:41.336 21:31:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:41.336 21:31:02 -- common/autotest_common.sh@10 -- # set +x 00:17:41.336 [2024-07-11 21:31:02.067326] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:17:41.336 [2024-07-11 21:31:02.067421] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67832 ] 00:17:41.336 [2024-07-11 21:31:02.210271] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 67813 has claimed it. 00:17:41.336 [2024-07-11 21:31:02.210346] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:17:41.902 ERROR: process (pid: 67832) is no longer running 00:17:41.902 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (67832) - No such process 00:17:41.902 21:31:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:41.902 21:31:02 -- common/autotest_common.sh@852 -- # return 1 00:17:41.902 21:31:02 -- common/autotest_common.sh@643 -- # es=1 00:17:41.902 21:31:02 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:17:41.902 21:31:02 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:17:41.902 21:31:02 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:17:41.902 21:31:02 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:17:41.902 21:31:02 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:17:41.902 21:31:02 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:17:41.902 21:31:02 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:17:41.902 21:31:02 -- event/cpu_locks.sh@141 -- # killprocess 67813 00:17:41.902 21:31:02 -- common/autotest_common.sh@926 -- # '[' -z 67813 ']' 00:17:41.902 21:31:02 -- common/autotest_common.sh@930 -- # kill -0 67813 00:17:41.902 21:31:02 -- common/autotest_common.sh@931 -- # uname 00:17:41.902 21:31:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:41.902 21:31:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 67813 00:17:42.160 21:31:02 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:42.160 21:31:02 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:42.160 21:31:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 67813' 00:17:42.160 killing process with pid 67813 00:17:42.160 21:31:02 -- common/autotest_common.sh@945 -- # kill 67813 00:17:42.160 21:31:02 -- common/autotest_common.sh@950 -- # wait 67813 00:17:42.418 00:17:42.418 real 0m2.253s 00:17:42.418 user 0m6.345s 00:17:42.418 sys 0m0.429s 00:17:42.418 21:31:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:42.418 21:31:03 -- common/autotest_common.sh@10 -- # set +x 00:17:42.418 ************************************ 00:17:42.418 END TEST locking_overlapped_coremask 00:17:42.418 ************************************ 00:17:42.418 21:31:03 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:17:42.418 21:31:03 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:17:42.418 21:31:03 -- 
common/autotest_common.sh@1083 -- # xtrace_disable 00:17:42.418 21:31:03 -- common/autotest_common.sh@10 -- # set +x 00:17:42.418 ************************************ 00:17:42.418 START TEST locking_overlapped_coremask_via_rpc 00:17:42.418 ************************************ 00:17:42.418 21:31:03 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask_via_rpc 00:17:42.418 21:31:03 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=67877 00:17:42.418 21:31:03 -- event/cpu_locks.sh@149 -- # waitforlisten 67877 /var/tmp/spdk.sock 00:17:42.418 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:42.418 21:31:03 -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:17:42.418 21:31:03 -- common/autotest_common.sh@819 -- # '[' -z 67877 ']' 00:17:42.418 21:31:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:42.418 21:31:03 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:42.418 21:31:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:42.418 21:31:03 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:42.418 21:31:03 -- common/autotest_common.sh@10 -- # set +x 00:17:42.418 [2024-07-11 21:31:03.335237] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:17:42.418 [2024-07-11 21:31:03.335327] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67877 ] 00:17:42.675 [2024-07-11 21:31:03.467109] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:17:42.675 [2024-07-11 21:31:03.467168] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:42.675 [2024-07-11 21:31:03.540704] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:42.675 [2024-07-11 21:31:03.541247] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:42.675 [2024-07-11 21:31:03.541350] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:42.675 [2024-07-11 21:31:03.541354] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:43.608 21:31:04 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:43.608 21:31:04 -- common/autotest_common.sh@852 -- # return 0 00:17:43.608 21:31:04 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=67894 00:17:43.608 21:31:04 -- event/cpu_locks.sh@153 -- # waitforlisten 67894 /var/tmp/spdk2.sock 00:17:43.608 21:31:04 -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:17:43.608 21:31:04 -- common/autotest_common.sh@819 -- # '[' -z 67894 ']' 00:17:43.608 21:31:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:17:43.608 21:31:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:43.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:17:43.608 21:31:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:17:43.608 21:31:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:43.608 21:31:04 -- common/autotest_common.sh@10 -- # set +x 00:17:43.608 [2024-07-11 21:31:04.291667] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:17:43.608 [2024-07-11 21:31:04.291928] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67894 ] 00:17:43.608 [2024-07-11 21:31:04.434048] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:17:43.608 [2024-07-11 21:31:04.434101] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:43.866 [2024-07-11 21:31:04.607496] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:43.866 [2024-07-11 21:31:04.607766] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:43.866 [2024-07-11 21:31:04.607897] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:43.867 [2024-07-11 21:31:04.607898] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:17:44.434 21:31:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:44.434 21:31:05 -- common/autotest_common.sh@852 -- # return 0 00:17:44.434 21:31:05 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:17:44.434 21:31:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:44.434 21:31:05 -- common/autotest_common.sh@10 -- # set +x 00:17:44.434 21:31:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:44.434 21:31:05 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:17:44.434 21:31:05 -- common/autotest_common.sh@640 -- # local es=0 00:17:44.434 21:31:05 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:17:44.434 21:31:05 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:17:44.434 21:31:05 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:44.434 21:31:05 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:17:44.434 21:31:05 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:44.434 21:31:05 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:17:44.434 21:31:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:44.434 21:31:05 -- common/autotest_common.sh@10 -- # set +x 00:17:44.434 [2024-07-11 21:31:05.237608] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 67877 has claimed it. 00:17:44.434 request: 00:17:44.434 { 00:17:44.434 "method": "framework_enable_cpumask_locks", 00:17:44.434 "req_id": 1 00:17:44.434 } 00:17:44.434 Got JSON-RPC error response 00:17:44.434 response: 00:17:44.434 { 00:17:44.434 "code": -32603, 00:17:44.434 "message": "Failed to claim CPU core: 2" 00:17:44.434 } 00:17:44.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
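The JSON-RPC failure above is the expected outcome of the overlapped-coremask scenario: the first target runs with -m 0x7 (cores 0-2) and already holds the lock for core 2, so when the second target (-m 0x1c, cores 2-4, started with --disable-cpumask-locks) later asks to claim its cores over RPC, the claim on core 2 is refused with the internal error shown, and the NOT wrapper turns that non-zero exit into a pass. Reproduced by hand, the failing call is roughly the sketch below (rpc.py path assumed, socket path from the trace).

  # Sketch: asking the unlocked target to claim cores that overlap a locked one (expected to fail)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks \
    || echo "claim refused: core 2 is already locked by the -m 0x7 target"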
00:17:44.434 21:31:05 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:17:44.434 21:31:05 -- common/autotest_common.sh@643 -- # es=1 00:17:44.434 21:31:05 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:17:44.434 21:31:05 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:17:44.434 21:31:05 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:17:44.434 21:31:05 -- event/cpu_locks.sh@158 -- # waitforlisten 67877 /var/tmp/spdk.sock 00:17:44.434 21:31:05 -- common/autotest_common.sh@819 -- # '[' -z 67877 ']' 00:17:44.434 21:31:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:44.434 21:31:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:44.434 21:31:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:44.434 21:31:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:44.434 21:31:05 -- common/autotest_common.sh@10 -- # set +x 00:17:44.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:17:44.693 21:31:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:44.693 21:31:05 -- common/autotest_common.sh@852 -- # return 0 00:17:44.693 21:31:05 -- event/cpu_locks.sh@159 -- # waitforlisten 67894 /var/tmp/spdk2.sock 00:17:44.693 21:31:05 -- common/autotest_common.sh@819 -- # '[' -z 67894 ']' 00:17:44.693 21:31:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:17:44.693 21:31:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:44.693 21:31:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:17:44.693 21:31:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:44.693 21:31:05 -- common/autotest_common.sh@10 -- # set +x 00:17:44.951 ************************************ 00:17:44.951 END TEST locking_overlapped_coremask_via_rpc 00:17:44.951 ************************************ 00:17:44.951 21:31:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:44.951 21:31:05 -- common/autotest_common.sh@852 -- # return 0 00:17:44.951 21:31:05 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:17:44.951 21:31:05 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:17:44.951 21:31:05 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:17:44.951 21:31:05 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:17:44.951 00:17:44.951 real 0m2.483s 00:17:44.951 user 0m1.218s 00:17:44.951 sys 0m0.187s 00:17:44.951 21:31:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:44.951 21:31:05 -- common/autotest_common.sh@10 -- # set +x 00:17:44.951 21:31:05 -- event/cpu_locks.sh@174 -- # cleanup 00:17:44.951 21:31:05 -- event/cpu_locks.sh@15 -- # [[ -z 67877 ]] 00:17:44.951 21:31:05 -- event/cpu_locks.sh@15 -- # killprocess 67877 00:17:44.951 21:31:05 -- common/autotest_common.sh@926 -- # '[' -z 67877 ']' 00:17:44.951 21:31:05 -- common/autotest_common.sh@930 -- # kill -0 67877 00:17:44.951 21:31:05 -- common/autotest_common.sh@931 -- # uname 00:17:44.951 21:31:05 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:44.951 21:31:05 -- common/autotest_common.sh@932 -- # ps 
--no-headers -o comm= 67877 00:17:44.951 killing process with pid 67877 00:17:44.951 21:31:05 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:44.951 21:31:05 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:44.951 21:31:05 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 67877' 00:17:44.951 21:31:05 -- common/autotest_common.sh@945 -- # kill 67877 00:17:44.951 21:31:05 -- common/autotest_common.sh@950 -- # wait 67877 00:17:45.518 21:31:06 -- event/cpu_locks.sh@16 -- # [[ -z 67894 ]] 00:17:45.518 21:31:06 -- event/cpu_locks.sh@16 -- # killprocess 67894 00:17:45.518 21:31:06 -- common/autotest_common.sh@926 -- # '[' -z 67894 ']' 00:17:45.518 21:31:06 -- common/autotest_common.sh@930 -- # kill -0 67894 00:17:45.518 21:31:06 -- common/autotest_common.sh@931 -- # uname 00:17:45.518 21:31:06 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:45.518 21:31:06 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 67894 00:17:45.518 killing process with pid 67894 00:17:45.518 21:31:06 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:17:45.518 21:31:06 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:17:45.518 21:31:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 67894' 00:17:45.518 21:31:06 -- common/autotest_common.sh@945 -- # kill 67894 00:17:45.518 21:31:06 -- common/autotest_common.sh@950 -- # wait 67894 00:17:45.776 21:31:06 -- event/cpu_locks.sh@18 -- # rm -f 00:17:45.776 21:31:06 -- event/cpu_locks.sh@1 -- # cleanup 00:17:45.776 21:31:06 -- event/cpu_locks.sh@15 -- # [[ -z 67877 ]] 00:17:45.776 21:31:06 -- event/cpu_locks.sh@15 -- # killprocess 67877 00:17:45.776 Process with pid 67877 is not found 00:17:45.776 Process with pid 67894 is not found 00:17:45.776 21:31:06 -- common/autotest_common.sh@926 -- # '[' -z 67877 ']' 00:17:45.776 21:31:06 -- common/autotest_common.sh@930 -- # kill -0 67877 00:17:45.776 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (67877) - No such process 00:17:45.776 21:31:06 -- common/autotest_common.sh@953 -- # echo 'Process with pid 67877 is not found' 00:17:45.776 21:31:06 -- event/cpu_locks.sh@16 -- # [[ -z 67894 ]] 00:17:45.776 21:31:06 -- event/cpu_locks.sh@16 -- # killprocess 67894 00:17:45.776 21:31:06 -- common/autotest_common.sh@926 -- # '[' -z 67894 ']' 00:17:45.776 21:31:06 -- common/autotest_common.sh@930 -- # kill -0 67894 00:17:45.776 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (67894) - No such process 00:17:45.776 21:31:06 -- common/autotest_common.sh@953 -- # echo 'Process with pid 67894 is not found' 00:17:45.776 21:31:06 -- event/cpu_locks.sh@18 -- # rm -f 00:17:45.776 ************************************ 00:17:45.776 END TEST cpu_locks 00:17:45.776 ************************************ 00:17:45.776 00:17:45.776 real 0m20.822s 00:17:45.776 user 0m35.879s 00:17:45.776 sys 0m5.537s 00:17:45.776 21:31:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:45.776 21:31:06 -- common/autotest_common.sh@10 -- # set +x 00:17:45.776 ************************************ 00:17:45.776 END TEST event 00:17:45.776 ************************************ 00:17:45.776 00:17:45.776 real 0m48.860s 00:17:45.777 user 1m33.911s 00:17:45.777 sys 0m9.351s 00:17:45.777 21:31:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:45.777 21:31:06 -- common/autotest_common.sh@10 -- # set +x 00:17:45.777 21:31:06 -- spdk/autotest.sh@188 -- # run_test thread 
/home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:17:45.777 21:31:06 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:17:45.777 21:31:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:45.777 21:31:06 -- common/autotest_common.sh@10 -- # set +x 00:17:45.777 ************************************ 00:17:45.777 START TEST thread 00:17:45.777 ************************************ 00:17:45.777 21:31:06 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:17:46.035 * Looking for test storage... 00:17:46.035 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:17:46.035 21:31:06 -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:17:46.035 21:31:06 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:17:46.035 21:31:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:46.035 21:31:06 -- common/autotest_common.sh@10 -- # set +x 00:17:46.035 ************************************ 00:17:46.035 START TEST thread_poller_perf 00:17:46.035 ************************************ 00:17:46.035 21:31:06 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:17:46.035 [2024-07-11 21:31:06.782271] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:17:46.035 [2024-07-11 21:31:06.782347] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68011 ] 00:17:46.035 [2024-07-11 21:31:06.918547] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:46.293 [2024-07-11 21:31:07.013365] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:46.293 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:17:47.228 ====================================== 00:17:47.228 busy:2215589886 (cyc) 00:17:47.228 total_run_count: 295000 00:17:47.228 tsc_hz: 2200000000 (cyc) 00:17:47.228 ====================================== 00:17:47.228 poller_cost: 7510 (cyc), 3413 (nsec) 00:17:47.228 00:17:47.228 real 0m1.320s 00:17:47.228 user 0m1.155s 00:17:47.228 sys 0m0.055s 00:17:47.228 21:31:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:47.228 21:31:08 -- common/autotest_common.sh@10 -- # set +x 00:17:47.228 ************************************ 00:17:47.228 END TEST thread_poller_perf 00:17:47.228 ************************************ 00:17:47.228 21:31:08 -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:17:47.228 21:31:08 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:17:47.228 21:31:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:47.228 21:31:08 -- common/autotest_common.sh@10 -- # set +x 00:17:47.228 ************************************ 00:17:47.228 START TEST thread_poller_perf 00:17:47.228 ************************************ 00:17:47.228 21:31:08 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:17:47.228 [2024-07-11 21:31:08.154608] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
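The summary printed next reports total busy TSC cycles, the number of poller iterations, and the TSC frequency; poller_cost appears to be busy cycles divided by iterations, truncated, then converted to nanoseconds through tsc_hz. Checking the first run's figures from the output below: 2215589886 / 295000 is about 7510 cycles, and 7510 cycles at 2.2 GHz is about 3413 ns. The same arithmetic in shell form, as a sketch using the reported counters:

  # Sketch: re-deriving poller_cost from the counters in the summary below (first run's numbers)
  busy=2215589886; runs=295000; tsc_hz=2200000000
  awk -v b="$busy" -v r="$runs" -v hz="$tsc_hz" 'BEGIN {
    cyc = int(b / r)                                                        # 7510 cycles per iteration
    printf "poller_cost: %d (cyc), %d (nsec)\n", cyc, int(cyc * 1e9 / hz)   # 3413 nsec at 2.2 GHz
  }'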
00:17:47.228 [2024-07-11 21:31:08.154704] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68049 ] 00:17:47.486 [2024-07-11 21:31:08.288686] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:47.486 [2024-07-11 21:31:08.366382] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:47.486 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:17:48.860 ====================================== 00:17:48.860 busy:2203066753 (cyc) 00:17:48.860 total_run_count: 4182000 00:17:48.860 tsc_hz: 2200000000 (cyc) 00:17:48.860 ====================================== 00:17:48.860 poller_cost: 526 (cyc), 239 (nsec) 00:17:48.860 00:17:48.860 real 0m1.297s 00:17:48.860 user 0m1.134s 00:17:48.860 sys 0m0.057s 00:17:48.860 21:31:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:48.860 ************************************ 00:17:48.860 END TEST thread_poller_perf 00:17:48.860 ************************************ 00:17:48.860 21:31:09 -- common/autotest_common.sh@10 -- # set +x 00:17:48.860 21:31:09 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:17:48.860 00:17:48.860 real 0m2.794s 00:17:48.860 user 0m2.355s 00:17:48.860 sys 0m0.218s 00:17:48.860 21:31:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:48.860 ************************************ 00:17:48.860 END TEST thread 00:17:48.860 ************************************ 00:17:48.860 21:31:09 -- common/autotest_common.sh@10 -- # set +x 00:17:48.860 21:31:09 -- spdk/autotest.sh@189 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:17:48.860 21:31:09 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:17:48.860 21:31:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:48.860 21:31:09 -- common/autotest_common.sh@10 -- # set +x 00:17:48.860 ************************************ 00:17:48.860 START TEST accel 00:17:48.860 ************************************ 00:17:48.860 21:31:09 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:17:48.860 * Looking for test storage... 00:17:48.860 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:17:48.860 21:31:09 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:17:48.860 21:31:09 -- accel/accel.sh@74 -- # get_expected_opcs 00:17:48.860 21:31:09 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:17:48.860 21:31:09 -- accel/accel.sh@59 -- # spdk_tgt_pid=68122 00:17:48.860 21:31:09 -- accel/accel.sh@60 -- # waitforlisten 68122 00:17:48.860 21:31:09 -- common/autotest_common.sh@819 -- # '[' -z 68122 ']' 00:17:48.860 21:31:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:48.860 21:31:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:48.860 21:31:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:48.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:48.860 21:31:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:48.860 21:31:09 -- common/autotest_common.sh@10 -- # set +x 00:17:48.860 21:31:09 -- accel/accel.sh@58 -- # build_accel_config 00:17:48.860 21:31:09 -- accel/accel.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:17:48.860 21:31:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:17:48.860 21:31:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:17:48.860 21:31:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:17:48.860 21:31:09 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:17:48.860 21:31:09 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:17:48.860 21:31:09 -- accel/accel.sh@41 -- # local IFS=, 00:17:48.860 21:31:09 -- accel/accel.sh@42 -- # jq -r . 00:17:48.860 [2024-07-11 21:31:09.661638] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:17:48.860 [2024-07-11 21:31:09.661738] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68122 ] 00:17:48.861 [2024-07-11 21:31:09.796583] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:49.118 [2024-07-11 21:31:09.903522] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:49.118 [2024-07-11 21:31:09.903738] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:50.053 21:31:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:50.053 21:31:10 -- common/autotest_common.sh@852 -- # return 0 00:17:50.053 21:31:10 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:17:50.053 21:31:10 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:17:50.053 21:31:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:50.053 21:31:10 -- common/autotest_common.sh@10 -- # set +x 00:17:50.053 21:31:10 -- accel/accel.sh@62 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:17:50.053 21:31:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:50.053 21:31:10 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:17:50.053 21:31:10 -- accel/accel.sh@64 -- # IFS== 00:17:50.053 21:31:10 -- accel/accel.sh@64 -- # read -r opc module 00:17:50.053 21:31:10 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:17:50.053 21:31:10 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:17:50.053 21:31:10 -- accel/accel.sh@64 -- # IFS== 00:17:50.053 21:31:10 -- accel/accel.sh@64 -- # read -r opc module 00:17:50.053 21:31:10 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:17:50.053 21:31:10 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:17:50.053 21:31:10 -- accel/accel.sh@64 -- # IFS== 00:17:50.053 21:31:10 -- accel/accel.sh@64 -- # read -r opc module 00:17:50.053 21:31:10 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:17:50.053 21:31:10 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:17:50.053 21:31:10 -- accel/accel.sh@64 -- # IFS== 00:17:50.053 21:31:10 -- accel/accel.sh@64 -- # read -r opc module 00:17:50.053 21:31:10 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:17:50.053 21:31:10 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:17:50.053 21:31:10 -- accel/accel.sh@64 -- # IFS== 00:17:50.053 21:31:10 -- accel/accel.sh@64 -- # read -r opc module 00:17:50.053 21:31:10 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:17:50.053 21:31:10 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:17:50.053 21:31:10 -- accel/accel.sh@64 -- # IFS== 00:17:50.053 21:31:10 -- accel/accel.sh@64 -- # read -r opc module 00:17:50.053 21:31:10 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:17:50.053 21:31:10 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:17:50.053 21:31:10 -- accel/accel.sh@64 -- # IFS== 00:17:50.053 21:31:10 -- accel/accel.sh@64 -- # read -r opc module 00:17:50.053 21:31:10 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:17:50.053 21:31:10 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:17:50.053 21:31:10 -- accel/accel.sh@64 -- # IFS== 00:17:50.053 21:31:10 -- accel/accel.sh@64 -- # read -r opc module 00:17:50.053 21:31:10 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:17:50.053 21:31:10 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:17:50.053 21:31:10 -- accel/accel.sh@64 -- # IFS== 00:17:50.053 21:31:10 -- accel/accel.sh@64 -- # read -r opc module 00:17:50.053 21:31:10 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:17:50.053 21:31:10 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:17:50.053 21:31:10 -- accel/accel.sh@64 -- # IFS== 00:17:50.053 21:31:10 -- accel/accel.sh@64 -- # read -r opc module 00:17:50.053 21:31:10 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:17:50.053 21:31:10 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:17:50.053 21:31:10 -- accel/accel.sh@64 -- # IFS== 00:17:50.053 21:31:10 -- accel/accel.sh@64 -- # read -r opc module 00:17:50.053 21:31:10 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:17:50.053 21:31:10 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:17:50.053 21:31:10 -- accel/accel.sh@64 -- # IFS== 00:17:50.053 21:31:10 -- accel/accel.sh@64 -- # read -r opc module 00:17:50.053 21:31:10 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:17:50.053 21:31:10 -- accel/accel.sh@63 -- # for opc_opt in 
"${exp_opcs[@]}" 00:17:50.053 21:31:10 -- accel/accel.sh@64 -- # IFS== 00:17:50.053 21:31:10 -- accel/accel.sh@64 -- # read -r opc module 00:17:50.053 21:31:10 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:17:50.053 21:31:10 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:17:50.053 21:31:10 -- accel/accel.sh@64 -- # IFS== 00:17:50.053 21:31:10 -- accel/accel.sh@64 -- # read -r opc module 00:17:50.053 21:31:10 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:17:50.053 21:31:10 -- accel/accel.sh@67 -- # killprocess 68122 00:17:50.053 21:31:10 -- common/autotest_common.sh@926 -- # '[' -z 68122 ']' 00:17:50.053 21:31:10 -- common/autotest_common.sh@930 -- # kill -0 68122 00:17:50.053 21:31:10 -- common/autotest_common.sh@931 -- # uname 00:17:50.053 21:31:10 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:50.053 21:31:10 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 68122 00:17:50.053 21:31:10 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:50.053 21:31:10 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:50.053 killing process with pid 68122 00:17:50.053 21:31:10 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 68122' 00:17:50.053 21:31:10 -- common/autotest_common.sh@945 -- # kill 68122 00:17:50.053 21:31:10 -- common/autotest_common.sh@950 -- # wait 68122 00:17:50.312 21:31:11 -- accel/accel.sh@68 -- # trap - ERR 00:17:50.312 21:31:11 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:17:50.312 21:31:11 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:17:50.312 21:31:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:50.312 21:31:11 -- common/autotest_common.sh@10 -- # set +x 00:17:50.312 21:31:11 -- common/autotest_common.sh@1104 -- # accel_perf -h 00:17:50.312 21:31:11 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:17:50.312 21:31:11 -- accel/accel.sh@12 -- # build_accel_config 00:17:50.312 21:31:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:17:50.312 21:31:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:17:50.312 21:31:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:17:50.312 21:31:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:17:50.312 21:31:11 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:17:50.312 21:31:11 -- accel/accel.sh@41 -- # local IFS=, 00:17:50.312 21:31:11 -- accel/accel.sh@42 -- # jq -r . 
00:17:50.312 21:31:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:50.312 21:31:11 -- common/autotest_common.sh@10 -- # set +x 00:17:50.312 21:31:11 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:17:50.312 21:31:11 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:17:50.312 21:31:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:50.312 21:31:11 -- common/autotest_common.sh@10 -- # set +x 00:17:50.312 ************************************ 00:17:50.312 START TEST accel_missing_filename 00:17:50.312 ************************************ 00:17:50.312 21:31:11 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress 00:17:50.312 21:31:11 -- common/autotest_common.sh@640 -- # local es=0 00:17:50.312 21:31:11 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress 00:17:50.312 21:31:11 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:17:50.312 21:31:11 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:50.312 21:31:11 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:17:50.312 21:31:11 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:50.312 21:31:11 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress 00:17:50.312 21:31:11 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:17:50.312 21:31:11 -- accel/accel.sh@12 -- # build_accel_config 00:17:50.312 21:31:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:17:50.312 21:31:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:17:50.312 21:31:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:17:50.312 21:31:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:17:50.312 21:31:11 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:17:50.312 21:31:11 -- accel/accel.sh@41 -- # local IFS=, 00:17:50.312 21:31:11 -- accel/accel.sh@42 -- # jq -r . 00:17:50.571 [2024-07-11 21:31:11.279981] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:17:50.571 [2024-07-11 21:31:11.280079] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68174 ] 00:17:50.571 [2024-07-11 21:31:11.423020] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:50.830 [2024-07-11 21:31:11.523231] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:50.830 [2024-07-11 21:31:11.579587] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:17:50.830 [2024-07-11 21:31:11.655466] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:17:50.830 A filename is required. 
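The accel_missing_filename test above runs accel_perf with a compress workload but no -l input file; accel_perf refuses to start ("A filename is required.") and the NOT wrapper treats the non-zero exit as a pass. A rough way to reproduce the check by hand with the same binary (the -c /dev/fd/62 JSON config that the harness pipes in is left out here):

# Compress needs -l <uncompressed input file>; without it the app must fail.
if /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress; then
  echo "unexpected success: compress ran without an input file" >&2
  exit 1
else
  echo "failed as expected: missing -l filename"
fi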
00:17:50.830 21:31:11 -- common/autotest_common.sh@643 -- # es=234 00:17:50.830 21:31:11 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:17:50.830 21:31:11 -- common/autotest_common.sh@652 -- # es=106 00:17:50.830 21:31:11 -- common/autotest_common.sh@653 -- # case "$es" in 00:17:50.830 21:31:11 -- common/autotest_common.sh@660 -- # es=1 00:17:50.830 21:31:11 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:17:50.830 00:17:50.830 real 0m0.480s 00:17:50.830 user 0m0.317s 00:17:50.830 sys 0m0.118s 00:17:50.830 21:31:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:50.830 ************************************ 00:17:50.830 END TEST accel_missing_filename 00:17:50.830 ************************************ 00:17:50.830 21:31:11 -- common/autotest_common.sh@10 -- # set +x 00:17:50.830 21:31:11 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:17:50.830 21:31:11 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:17:50.830 21:31:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:50.830 21:31:11 -- common/autotest_common.sh@10 -- # set +x 00:17:51.103 ************************************ 00:17:51.103 START TEST accel_compress_verify 00:17:51.103 ************************************ 00:17:51.103 21:31:11 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:17:51.103 21:31:11 -- common/autotest_common.sh@640 -- # local es=0 00:17:51.103 21:31:11 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:17:51.103 21:31:11 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:17:51.103 21:31:11 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:51.103 21:31:11 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:17:51.103 21:31:11 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:51.103 21:31:11 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:17:51.103 21:31:11 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:17:51.103 21:31:11 -- accel/accel.sh@12 -- # build_accel_config 00:17:51.103 21:31:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:17:51.103 21:31:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:17:51.103 21:31:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:17:51.103 21:31:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:17:51.103 21:31:11 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:17:51.103 21:31:11 -- accel/accel.sh@41 -- # local IFS=, 00:17:51.103 21:31:11 -- accel/accel.sh@42 -- # jq -r . 00:17:51.103 [2024-07-11 21:31:11.813593] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:17:51.103 [2024-07-11 21:31:11.814353] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68198 ] 00:17:51.103 [2024-07-11 21:31:11.950459] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:51.103 [2024-07-11 21:31:12.046714] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:51.361 [2024-07-11 21:31:12.102934] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:17:51.361 [2024-07-11 21:31:12.178088] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:17:51.361 00:17:51.361 Compression does not support the verify option, aborting. 00:17:51.361 21:31:12 -- common/autotest_common.sh@643 -- # es=161 00:17:51.361 21:31:12 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:17:51.361 21:31:12 -- common/autotest_common.sh@652 -- # es=33 00:17:51.361 21:31:12 -- common/autotest_common.sh@653 -- # case "$es" in 00:17:51.361 21:31:12 -- common/autotest_common.sh@660 -- # es=1 00:17:51.361 21:31:12 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:17:51.361 00:17:51.361 real 0m0.470s 00:17:51.361 user 0m0.295s 00:17:51.361 sys 0m0.116s 00:17:51.361 21:31:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:51.361 21:31:12 -- common/autotest_common.sh@10 -- # set +x 00:17:51.361 ************************************ 00:17:51.361 END TEST accel_compress_verify 00:17:51.361 ************************************ 00:17:51.361 21:31:12 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:17:51.361 21:31:12 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:17:51.361 21:31:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:51.361 21:31:12 -- common/autotest_common.sh@10 -- # set +x 00:17:51.361 ************************************ 00:17:51.361 START TEST accel_wrong_workload 00:17:51.361 ************************************ 00:17:51.361 21:31:12 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w foobar 00:17:51.361 21:31:12 -- common/autotest_common.sh@640 -- # local es=0 00:17:51.361 21:31:12 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:17:51.361 21:31:12 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:17:51.361 21:31:12 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:51.361 21:31:12 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:17:51.361 21:31:12 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:51.361 21:31:12 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w foobar 00:17:51.361 21:31:12 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:17:51.361 21:31:12 -- accel/accel.sh@12 -- # build_accel_config 00:17:51.361 21:31:12 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:17:51.620 21:31:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:17:51.620 21:31:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:17:51.620 21:31:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:17:51.620 21:31:12 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:17:51.620 21:31:12 -- accel/accel.sh@41 -- # local IFS=, 00:17:51.620 21:31:12 -- accel/accel.sh@42 -- # jq -r . 
00:17:51.620 Unsupported workload type: foobar 00:17:51.620 [2024-07-11 21:31:12.329455] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:17:51.620 accel_perf options: 00:17:51.620 [-h help message] 00:17:51.620 [-q queue depth per core] 00:17:51.620 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:17:51.620 [-T number of threads per core 00:17:51.620 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:17:51.620 [-t time in seconds] 00:17:51.620 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:17:51.620 [ dif_verify, , dif_generate, dif_generate_copy 00:17:51.620 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:17:51.620 [-l for compress/decompress workloads, name of uncompressed input file 00:17:51.620 [-S for crc32c workload, use this seed value (default 0) 00:17:51.620 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:17:51.620 [-f for fill workload, use this BYTE value (default 255) 00:17:51.620 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:17:51.620 [-y verify result if this switch is on] 00:17:51.620 [-a tasks to allocate per core (default: same value as -q)] 00:17:51.620 Can be used to spread operations across a wider range of memory. 00:17:51.620 21:31:12 -- common/autotest_common.sh@643 -- # es=1 00:17:51.620 21:31:12 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:17:51.620 21:31:12 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:17:51.620 21:31:12 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:17:51.620 00:17:51.620 real 0m0.029s 00:17:51.620 user 0m0.011s 00:17:51.620 sys 0m0.018s 00:17:51.620 21:31:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:51.620 ************************************ 00:17:51.620 END TEST accel_wrong_workload 00:17:51.620 ************************************ 00:17:51.620 21:31:12 -- common/autotest_common.sh@10 -- # set +x 00:17:51.620 21:31:12 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:17:51.620 21:31:12 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:17:51.620 21:31:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:51.620 21:31:12 -- common/autotest_common.sh@10 -- # set +x 00:17:51.620 ************************************ 00:17:51.620 START TEST accel_negative_buffers 00:17:51.620 ************************************ 00:17:51.620 21:31:12 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:17:51.620 21:31:12 -- common/autotest_common.sh@640 -- # local es=0 00:17:51.620 21:31:12 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:17:51.620 21:31:12 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:17:51.620 21:31:12 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:51.620 21:31:12 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:17:51.620 21:31:12 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:51.620 21:31:12 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w xor -y -x -1 00:17:51.620 21:31:12 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:17:51.620 21:31:12 -- accel/accel.sh@12 -- # 
build_accel_config 00:17:51.620 21:31:12 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:17:51.620 21:31:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:17:51.620 21:31:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:17:51.620 21:31:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:17:51.620 21:31:12 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:17:51.620 21:31:12 -- accel/accel.sh@41 -- # local IFS=, 00:17:51.620 21:31:12 -- accel/accel.sh@42 -- # jq -r . 00:17:51.620 -x option must be non-negative. 00:17:51.620 [2024-07-11 21:31:12.406021] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:17:51.620 accel_perf options: 00:17:51.620 [-h help message] 00:17:51.620 [-q queue depth per core] 00:17:51.620 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:17:51.620 [-T number of threads per core 00:17:51.620 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:17:51.620 [-t time in seconds] 00:17:51.620 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:17:51.620 [ dif_verify, , dif_generate, dif_generate_copy 00:17:51.620 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:17:51.620 [-l for compress/decompress workloads, name of uncompressed input file 00:17:51.620 [-S for crc32c workload, use this seed value (default 0) 00:17:51.620 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:17:51.620 [-f for fill workload, use this BYTE value (default 255) 00:17:51.620 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:17:51.620 [-y verify result if this switch is on] 00:17:51.620 [-a tasks to allocate per core (default: same value as -q)] 00:17:51.620 Can be used to spread operations across a wider range of memory. 
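The accel_wrong_workload and accel_negative_buffers tests exercise accel_perf's argument validation: an unknown -w value and a negative -x source-buffer count both make spdk_app_parse_args fail before the app starts, which is why the usage text is printed for each run. A small sketch of the two invalid invocations being checked (same binary path; the harness's -c /dev/fd/62 config is again omitted):

bin=/home/vagrant/spdk_repo/spdk/build/examples/accel_perf

# Unknown workload type: prints "Unsupported workload type: foobar", exits non-zero.
! "$bin" -t 1 -w foobar

# xor needs at least two source buffers, so -x -1 is rejected
# ("-x option must be non-negative.") and the run exits non-zero.
! "$bin" -t 1 -w xor -y -x -1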
00:17:51.620 21:31:12 -- common/autotest_common.sh@643 -- # es=1 00:17:51.620 21:31:12 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:17:51.620 ************************************ 00:17:51.620 END TEST accel_negative_buffers 00:17:51.620 ************************************ 00:17:51.620 21:31:12 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:17:51.620 21:31:12 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:17:51.620 00:17:51.620 real 0m0.033s 00:17:51.620 user 0m0.015s 00:17:51.620 sys 0m0.017s 00:17:51.620 21:31:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:51.620 21:31:12 -- common/autotest_common.sh@10 -- # set +x 00:17:51.620 21:31:12 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:17:51.620 21:31:12 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:17:51.620 21:31:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:51.620 21:31:12 -- common/autotest_common.sh@10 -- # set +x 00:17:51.620 ************************************ 00:17:51.620 START TEST accel_crc32c 00:17:51.620 ************************************ 00:17:51.620 21:31:12 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -S 32 -y 00:17:51.620 21:31:12 -- accel/accel.sh@16 -- # local accel_opc 00:17:51.620 21:31:12 -- accel/accel.sh@17 -- # local accel_module 00:17:51.620 21:31:12 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:17:51.620 21:31:12 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:17:51.620 21:31:12 -- accel/accel.sh@12 -- # build_accel_config 00:17:51.620 21:31:12 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:17:51.620 21:31:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:17:51.620 21:31:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:17:51.620 21:31:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:17:51.620 21:31:12 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:17:51.620 21:31:12 -- accel/accel.sh@41 -- # local IFS=, 00:17:51.620 21:31:12 -- accel/accel.sh@42 -- # jq -r . 00:17:51.620 [2024-07-11 21:31:12.481230] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:17:51.620 [2024-07-11 21:31:12.481315] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68257 ] 00:17:51.879 [2024-07-11 21:31:12.617976] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:51.879 [2024-07-11 21:31:12.715733] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:53.252 21:31:13 -- accel/accel.sh@18 -- # out=' 00:17:53.252 SPDK Configuration: 00:17:53.252 Core mask: 0x1 00:17:53.252 00:17:53.252 Accel Perf Configuration: 00:17:53.252 Workload Type: crc32c 00:17:53.252 CRC-32C seed: 32 00:17:53.252 Transfer size: 4096 bytes 00:17:53.252 Vector count 1 00:17:53.252 Module: software 00:17:53.252 Queue depth: 32 00:17:53.252 Allocate depth: 32 00:17:53.252 # threads/core: 1 00:17:53.252 Run time: 1 seconds 00:17:53.252 Verify: Yes 00:17:53.252 00:17:53.252 Running for 1 seconds... 
00:17:53.252 00:17:53.252 Core,Thread Transfers Bandwidth Failed Miscompares 00:17:53.252 ------------------------------------------------------------------------------------ 00:17:53.252 0,0 432960/s 1691 MiB/s 0 0 00:17:53.252 ==================================================================================== 00:17:53.252 Total 432960/s 1691 MiB/s 0 0' 00:17:53.252 21:31:13 -- accel/accel.sh@20 -- # IFS=: 00:17:53.252 21:31:13 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:17:53.253 21:31:13 -- accel/accel.sh@20 -- # read -r var val 00:17:53.253 21:31:13 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:17:53.253 21:31:13 -- accel/accel.sh@12 -- # build_accel_config 00:17:53.253 21:31:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:17:53.253 21:31:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:17:53.253 21:31:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:17:53.253 21:31:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:17:53.253 21:31:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:17:53.253 21:31:13 -- accel/accel.sh@41 -- # local IFS=, 00:17:53.253 21:31:13 -- accel/accel.sh@42 -- # jq -r . 00:17:53.253 [2024-07-11 21:31:13.935575] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:17:53.253 [2024-07-11 21:31:13.935659] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68277 ] 00:17:53.253 [2024-07-11 21:31:14.068625] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:53.253 [2024-07-11 21:31:14.160788] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:53.509 21:31:14 -- accel/accel.sh@21 -- # val= 00:17:53.509 21:31:14 -- accel/accel.sh@22 -- # case "$var" in 00:17:53.509 21:31:14 -- accel/accel.sh@20 -- # IFS=: 00:17:53.509 21:31:14 -- accel/accel.sh@20 -- # read -r var val 00:17:53.509 21:31:14 -- accel/accel.sh@21 -- # val= 00:17:53.509 21:31:14 -- accel/accel.sh@22 -- # case "$var" in 00:17:53.509 21:31:14 -- accel/accel.sh@20 -- # IFS=: 00:17:53.509 21:31:14 -- accel/accel.sh@20 -- # read -r var val 00:17:53.509 21:31:14 -- accel/accel.sh@21 -- # val=0x1 00:17:53.509 21:31:14 -- accel/accel.sh@22 -- # case "$var" in 00:17:53.509 21:31:14 -- accel/accel.sh@20 -- # IFS=: 00:17:53.509 21:31:14 -- accel/accel.sh@20 -- # read -r var val 00:17:53.509 21:31:14 -- accel/accel.sh@21 -- # val= 00:17:53.509 21:31:14 -- accel/accel.sh@22 -- # case "$var" in 00:17:53.509 21:31:14 -- accel/accel.sh@20 -- # IFS=: 00:17:53.509 21:31:14 -- accel/accel.sh@20 -- # read -r var val 00:17:53.509 21:31:14 -- accel/accel.sh@21 -- # val= 00:17:53.509 21:31:14 -- accel/accel.sh@22 -- # case "$var" in 00:17:53.509 21:31:14 -- accel/accel.sh@20 -- # IFS=: 00:17:53.509 21:31:14 -- accel/accel.sh@20 -- # read -r var val 00:17:53.509 21:31:14 -- accel/accel.sh@21 -- # val=crc32c 00:17:53.509 21:31:14 -- accel/accel.sh@22 -- # case "$var" in 00:17:53.509 21:31:14 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:17:53.509 21:31:14 -- accel/accel.sh@20 -- # IFS=: 00:17:53.509 21:31:14 -- accel/accel.sh@20 -- # read -r var val 00:17:53.509 21:31:14 -- accel/accel.sh@21 -- # val=32 00:17:53.509 21:31:14 -- accel/accel.sh@22 -- # case "$var" in 00:17:53.509 21:31:14 -- accel/accel.sh@20 -- # IFS=: 00:17:53.509 21:31:14 -- accel/accel.sh@20 -- # read -r var val 00:17:53.509 21:31:14 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:17:53.509 21:31:14 -- accel/accel.sh@22 -- # case "$var" in 00:17:53.509 21:31:14 -- accel/accel.sh@20 -- # IFS=: 00:17:53.509 21:31:14 -- accel/accel.sh@20 -- # read -r var val 00:17:53.509 21:31:14 -- accel/accel.sh@21 -- # val= 00:17:53.509 21:31:14 -- accel/accel.sh@22 -- # case "$var" in 00:17:53.509 21:31:14 -- accel/accel.sh@20 -- # IFS=: 00:17:53.509 21:31:14 -- accel/accel.sh@20 -- # read -r var val 00:17:53.509 21:31:14 -- accel/accel.sh@21 -- # val=software 00:17:53.509 21:31:14 -- accel/accel.sh@22 -- # case "$var" in 00:17:53.510 21:31:14 -- accel/accel.sh@23 -- # accel_module=software 00:17:53.510 21:31:14 -- accel/accel.sh@20 -- # IFS=: 00:17:53.510 21:31:14 -- accel/accel.sh@20 -- # read -r var val 00:17:53.510 21:31:14 -- accel/accel.sh@21 -- # val=32 00:17:53.510 21:31:14 -- accel/accel.sh@22 -- # case "$var" in 00:17:53.510 21:31:14 -- accel/accel.sh@20 -- # IFS=: 00:17:53.510 21:31:14 -- accel/accel.sh@20 -- # read -r var val 00:17:53.510 21:31:14 -- accel/accel.sh@21 -- # val=32 00:17:53.510 21:31:14 -- accel/accel.sh@22 -- # case "$var" in 00:17:53.510 21:31:14 -- accel/accel.sh@20 -- # IFS=: 00:17:53.510 21:31:14 -- accel/accel.sh@20 -- # read -r var val 00:17:53.510 21:31:14 -- accel/accel.sh@21 -- # val=1 00:17:53.510 21:31:14 -- accel/accel.sh@22 -- # case "$var" in 00:17:53.510 21:31:14 -- accel/accel.sh@20 -- # IFS=: 00:17:53.510 21:31:14 -- accel/accel.sh@20 -- # read -r var val 00:17:53.510 21:31:14 -- accel/accel.sh@21 -- # val='1 seconds' 00:17:53.510 21:31:14 -- accel/accel.sh@22 -- # case "$var" in 00:17:53.510 21:31:14 -- accel/accel.sh@20 -- # IFS=: 00:17:53.510 21:31:14 -- accel/accel.sh@20 -- # read -r var val 00:17:53.510 21:31:14 -- accel/accel.sh@21 -- # val=Yes 00:17:53.510 21:31:14 -- accel/accel.sh@22 -- # case "$var" in 00:17:53.510 21:31:14 -- accel/accel.sh@20 -- # IFS=: 00:17:53.510 21:31:14 -- accel/accel.sh@20 -- # read -r var val 00:17:53.510 21:31:14 -- accel/accel.sh@21 -- # val= 00:17:53.510 21:31:14 -- accel/accel.sh@22 -- # case "$var" in 00:17:53.510 21:31:14 -- accel/accel.sh@20 -- # IFS=: 00:17:53.510 21:31:14 -- accel/accel.sh@20 -- # read -r var val 00:17:53.510 21:31:14 -- accel/accel.sh@21 -- # val= 00:17:53.510 21:31:14 -- accel/accel.sh@22 -- # case "$var" in 00:17:53.510 21:31:14 -- accel/accel.sh@20 -- # IFS=: 00:17:53.510 21:31:14 -- accel/accel.sh@20 -- # read -r var val 00:17:54.442 21:31:15 -- accel/accel.sh@21 -- # val= 00:17:54.442 21:31:15 -- accel/accel.sh@22 -- # case "$var" in 00:17:54.442 21:31:15 -- accel/accel.sh@20 -- # IFS=: 00:17:54.442 21:31:15 -- accel/accel.sh@20 -- # read -r var val 00:17:54.442 21:31:15 -- accel/accel.sh@21 -- # val= 00:17:54.442 21:31:15 -- accel/accel.sh@22 -- # case "$var" in 00:17:54.442 21:31:15 -- accel/accel.sh@20 -- # IFS=: 00:17:54.442 21:31:15 -- accel/accel.sh@20 -- # read -r var val 00:17:54.442 21:31:15 -- accel/accel.sh@21 -- # val= 00:17:54.442 21:31:15 -- accel/accel.sh@22 -- # case "$var" in 00:17:54.442 21:31:15 -- accel/accel.sh@20 -- # IFS=: 00:17:54.442 21:31:15 -- accel/accel.sh@20 -- # read -r var val 00:17:54.442 21:31:15 -- accel/accel.sh@21 -- # val= 00:17:54.442 21:31:15 -- accel/accel.sh@22 -- # case "$var" in 00:17:54.442 21:31:15 -- accel/accel.sh@20 -- # IFS=: 00:17:54.442 21:31:15 -- accel/accel.sh@20 -- # read -r var val 00:17:54.442 21:31:15 -- accel/accel.sh@21 -- # val= 00:17:54.442 21:31:15 -- accel/accel.sh@22 -- # case "$var" in 00:17:54.442 21:31:15 -- accel/accel.sh@20 -- # IFS=: 00:17:54.442 21:31:15 -- 
accel/accel.sh@20 -- # read -r var val 00:17:54.442 21:31:15 -- accel/accel.sh@21 -- # val= 00:17:54.442 21:31:15 -- accel/accel.sh@22 -- # case "$var" in 00:17:54.442 21:31:15 -- accel/accel.sh@20 -- # IFS=: 00:17:54.442 21:31:15 -- accel/accel.sh@20 -- # read -r var val 00:17:54.442 21:31:15 -- accel/accel.sh@28 -- # [[ -n software ]] 00:17:54.442 21:31:15 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:17:54.442 21:31:15 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:54.442 00:17:54.442 real 0m2.904s 00:17:54.442 user 0m2.487s 00:17:54.442 sys 0m0.212s 00:17:54.442 21:31:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:54.442 21:31:15 -- common/autotest_common.sh@10 -- # set +x 00:17:54.442 ************************************ 00:17:54.442 END TEST accel_crc32c 00:17:54.442 ************************************ 00:17:54.700 21:31:15 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:17:54.700 21:31:15 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:17:54.700 21:31:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:54.700 21:31:15 -- common/autotest_common.sh@10 -- # set +x 00:17:54.700 ************************************ 00:17:54.700 START TEST accel_crc32c_C2 00:17:54.700 ************************************ 00:17:54.700 21:31:15 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -y -C 2 00:17:54.700 21:31:15 -- accel/accel.sh@16 -- # local accel_opc 00:17:54.700 21:31:15 -- accel/accel.sh@17 -- # local accel_module 00:17:54.700 21:31:15 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:17:54.700 21:31:15 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:17:54.700 21:31:15 -- accel/accel.sh@12 -- # build_accel_config 00:17:54.700 21:31:15 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:17:54.700 21:31:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:17:54.700 21:31:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:17:54.700 21:31:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:17:54.700 21:31:15 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:17:54.700 21:31:15 -- accel/accel.sh@41 -- # local IFS=, 00:17:54.700 21:31:15 -- accel/accel.sh@42 -- # jq -r . 00:17:54.700 [2024-07-11 21:31:15.437005] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:17:54.700 [2024-07-11 21:31:15.437080] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68312 ] 00:17:54.700 [2024-07-11 21:31:15.574366] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:54.958 [2024-07-11 21:31:15.671013] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:56.332 21:31:16 -- accel/accel.sh@18 -- # out=' 00:17:56.332 SPDK Configuration: 00:17:56.332 Core mask: 0x1 00:17:56.332 00:17:56.332 Accel Perf Configuration: 00:17:56.332 Workload Type: crc32c 00:17:56.332 CRC-32C seed: 0 00:17:56.332 Transfer size: 4096 bytes 00:17:56.332 Vector count 2 00:17:56.332 Module: software 00:17:56.332 Queue depth: 32 00:17:56.332 Allocate depth: 32 00:17:56.332 # threads/core: 1 00:17:56.332 Run time: 1 seconds 00:17:56.332 Verify: Yes 00:17:56.332 00:17:56.332 Running for 1 seconds... 
00:17:56.332 00:17:56.332 Core,Thread Transfers Bandwidth Failed Miscompares 00:17:56.332 ------------------------------------------------------------------------------------ 00:17:56.332 0,0 339904/s 2655 MiB/s 0 0 00:17:56.332 ==================================================================================== 00:17:56.332 Total 339904/s 1327 MiB/s 0 0' 00:17:56.332 21:31:16 -- accel/accel.sh@20 -- # IFS=: 00:17:56.332 21:31:16 -- accel/accel.sh@20 -- # read -r var val 00:17:56.332 21:31:16 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:17:56.332 21:31:16 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:17:56.332 21:31:16 -- accel/accel.sh@12 -- # build_accel_config 00:17:56.332 21:31:16 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:17:56.332 21:31:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:17:56.332 21:31:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:17:56.332 21:31:16 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:17:56.332 21:31:16 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:17:56.332 21:31:16 -- accel/accel.sh@41 -- # local IFS=, 00:17:56.332 21:31:16 -- accel/accel.sh@42 -- # jq -r . 00:17:56.332 [2024-07-11 21:31:16.903268] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:17:56.332 [2024-07-11 21:31:16.903355] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68331 ] 00:17:56.332 [2024-07-11 21:31:17.041826] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:56.332 [2024-07-11 21:31:17.133552] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:56.332 21:31:17 -- accel/accel.sh@21 -- # val= 00:17:56.332 21:31:17 -- accel/accel.sh@22 -- # case "$var" in 00:17:56.332 21:31:17 -- accel/accel.sh@20 -- # IFS=: 00:17:56.332 21:31:17 -- accel/accel.sh@20 -- # read -r var val 00:17:56.332 21:31:17 -- accel/accel.sh@21 -- # val= 00:17:56.332 21:31:17 -- accel/accel.sh@22 -- # case "$var" in 00:17:56.332 21:31:17 -- accel/accel.sh@20 -- # IFS=: 00:17:56.332 21:31:17 -- accel/accel.sh@20 -- # read -r var val 00:17:56.332 21:31:17 -- accel/accel.sh@21 -- # val=0x1 00:17:56.332 21:31:17 -- accel/accel.sh@22 -- # case "$var" in 00:17:56.332 21:31:17 -- accel/accel.sh@20 -- # IFS=: 00:17:56.332 21:31:17 -- accel/accel.sh@20 -- # read -r var val 00:17:56.332 21:31:17 -- accel/accel.sh@21 -- # val= 00:17:56.332 21:31:17 -- accel/accel.sh@22 -- # case "$var" in 00:17:56.332 21:31:17 -- accel/accel.sh@20 -- # IFS=: 00:17:56.332 21:31:17 -- accel/accel.sh@20 -- # read -r var val 00:17:56.332 21:31:17 -- accel/accel.sh@21 -- # val= 00:17:56.332 21:31:17 -- accel/accel.sh@22 -- # case "$var" in 00:17:56.332 21:31:17 -- accel/accel.sh@20 -- # IFS=: 00:17:56.332 21:31:17 -- accel/accel.sh@20 -- # read -r var val 00:17:56.332 21:31:17 -- accel/accel.sh@21 -- # val=crc32c 00:17:56.332 21:31:17 -- accel/accel.sh@22 -- # case "$var" in 00:17:56.332 21:31:17 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:17:56.332 21:31:17 -- accel/accel.sh@20 -- # IFS=: 00:17:56.332 21:31:17 -- accel/accel.sh@20 -- # read -r var val 00:17:56.332 21:31:17 -- accel/accel.sh@21 -- # val=0 00:17:56.333 21:31:17 -- accel/accel.sh@22 -- # case "$var" in 00:17:56.333 21:31:17 -- accel/accel.sh@20 -- # IFS=: 00:17:56.333 21:31:17 -- accel/accel.sh@20 -- # read -r var val 00:17:56.333 21:31:17 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:17:56.333 21:31:17 -- accel/accel.sh@22 -- # case "$var" in 00:17:56.333 21:31:17 -- accel/accel.sh@20 -- # IFS=: 00:17:56.333 21:31:17 -- accel/accel.sh@20 -- # read -r var val 00:17:56.333 21:31:17 -- accel/accel.sh@21 -- # val= 00:17:56.333 21:31:17 -- accel/accel.sh@22 -- # case "$var" in 00:17:56.333 21:31:17 -- accel/accel.sh@20 -- # IFS=: 00:17:56.333 21:31:17 -- accel/accel.sh@20 -- # read -r var val 00:17:56.333 21:31:17 -- accel/accel.sh@21 -- # val=software 00:17:56.333 21:31:17 -- accel/accel.sh@22 -- # case "$var" in 00:17:56.333 21:31:17 -- accel/accel.sh@23 -- # accel_module=software 00:17:56.333 21:31:17 -- accel/accel.sh@20 -- # IFS=: 00:17:56.333 21:31:17 -- accel/accel.sh@20 -- # read -r var val 00:17:56.333 21:31:17 -- accel/accel.sh@21 -- # val=32 00:17:56.333 21:31:17 -- accel/accel.sh@22 -- # case "$var" in 00:17:56.333 21:31:17 -- accel/accel.sh@20 -- # IFS=: 00:17:56.333 21:31:17 -- accel/accel.sh@20 -- # read -r var val 00:17:56.333 21:31:17 -- accel/accel.sh@21 -- # val=32 00:17:56.333 21:31:17 -- accel/accel.sh@22 -- # case "$var" in 00:17:56.333 21:31:17 -- accel/accel.sh@20 -- # IFS=: 00:17:56.333 21:31:17 -- accel/accel.sh@20 -- # read -r var val 00:17:56.333 21:31:17 -- accel/accel.sh@21 -- # val=1 00:17:56.333 21:31:17 -- accel/accel.sh@22 -- # case "$var" in 00:17:56.333 21:31:17 -- accel/accel.sh@20 -- # IFS=: 00:17:56.333 21:31:17 -- accel/accel.sh@20 -- # read -r var val 00:17:56.333 21:31:17 -- accel/accel.sh@21 -- # val='1 seconds' 00:17:56.333 21:31:17 -- accel/accel.sh@22 -- # case "$var" in 00:17:56.333 21:31:17 -- accel/accel.sh@20 -- # IFS=: 00:17:56.333 21:31:17 -- accel/accel.sh@20 -- # read -r var val 00:17:56.333 21:31:17 -- accel/accel.sh@21 -- # val=Yes 00:17:56.333 21:31:17 -- accel/accel.sh@22 -- # case "$var" in 00:17:56.333 21:31:17 -- accel/accel.sh@20 -- # IFS=: 00:17:56.333 21:31:17 -- accel/accel.sh@20 -- # read -r var val 00:17:56.333 21:31:17 -- accel/accel.sh@21 -- # val= 00:17:56.333 21:31:17 -- accel/accel.sh@22 -- # case "$var" in 00:17:56.333 21:31:17 -- accel/accel.sh@20 -- # IFS=: 00:17:56.333 21:31:17 -- accel/accel.sh@20 -- # read -r var val 00:17:56.333 21:31:17 -- accel/accel.sh@21 -- # val= 00:17:56.333 21:31:17 -- accel/accel.sh@22 -- # case "$var" in 00:17:56.333 21:31:17 -- accel/accel.sh@20 -- # IFS=: 00:17:56.333 21:31:17 -- accel/accel.sh@20 -- # read -r var val 00:17:57.708 21:31:18 -- accel/accel.sh@21 -- # val= 00:17:57.708 21:31:18 -- accel/accel.sh@22 -- # case "$var" in 00:17:57.708 21:31:18 -- accel/accel.sh@20 -- # IFS=: 00:17:57.708 21:31:18 -- accel/accel.sh@20 -- # read -r var val 00:17:57.708 21:31:18 -- accel/accel.sh@21 -- # val= 00:17:57.708 21:31:18 -- accel/accel.sh@22 -- # case "$var" in 00:17:57.708 21:31:18 -- accel/accel.sh@20 -- # IFS=: 00:17:57.708 21:31:18 -- accel/accel.sh@20 -- # read -r var val 00:17:57.708 21:31:18 -- accel/accel.sh@21 -- # val= 00:17:57.708 21:31:18 -- accel/accel.sh@22 -- # case "$var" in 00:17:57.708 21:31:18 -- accel/accel.sh@20 -- # IFS=: 00:17:57.708 21:31:18 -- accel/accel.sh@20 -- # read -r var val 00:17:57.708 21:31:18 -- accel/accel.sh@21 -- # val= 00:17:57.708 21:31:18 -- accel/accel.sh@22 -- # case "$var" in 00:17:57.708 21:31:18 -- accel/accel.sh@20 -- # IFS=: 00:17:57.708 21:31:18 -- accel/accel.sh@20 -- # read -r var val 00:17:57.708 ************************************ 00:17:57.708 END TEST accel_crc32c_C2 00:17:57.708 ************************************ 00:17:57.708 21:31:18 -- accel/accel.sh@21 -- # val= 
00:17:57.708 21:31:18 -- accel/accel.sh@22 -- # case "$var" in 00:17:57.708 21:31:18 -- accel/accel.sh@20 -- # IFS=: 00:17:57.708 21:31:18 -- accel/accel.sh@20 -- # read -r var val 00:17:57.708 21:31:18 -- accel/accel.sh@21 -- # val= 00:17:57.708 21:31:18 -- accel/accel.sh@22 -- # case "$var" in 00:17:57.708 21:31:18 -- accel/accel.sh@20 -- # IFS=: 00:17:57.708 21:31:18 -- accel/accel.sh@20 -- # read -r var val 00:17:57.708 21:31:18 -- accel/accel.sh@28 -- # [[ -n software ]] 00:17:57.708 21:31:18 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:17:57.708 21:31:18 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:57.708 00:17:57.708 real 0m2.945s 00:17:57.708 user 0m2.513s 00:17:57.708 sys 0m0.225s 00:17:57.708 21:31:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:57.708 21:31:18 -- common/autotest_common.sh@10 -- # set +x 00:17:57.708 21:31:18 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:17:57.708 21:31:18 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:17:57.708 21:31:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:57.708 21:31:18 -- common/autotest_common.sh@10 -- # set +x 00:17:57.708 ************************************ 00:17:57.708 START TEST accel_copy 00:17:57.708 ************************************ 00:17:57.708 21:31:18 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy -y 00:17:57.708 21:31:18 -- accel/accel.sh@16 -- # local accel_opc 00:17:57.708 21:31:18 -- accel/accel.sh@17 -- # local accel_module 00:17:57.708 21:31:18 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:17:57.708 21:31:18 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:17:57.708 21:31:18 -- accel/accel.sh@12 -- # build_accel_config 00:17:57.708 21:31:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:17:57.708 21:31:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:17:57.708 21:31:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:17:57.708 21:31:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:17:57.708 21:31:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:17:57.708 21:31:18 -- accel/accel.sh@41 -- # local IFS=, 00:17:57.708 21:31:18 -- accel/accel.sh@42 -- # jq -r . 00:17:57.708 [2024-07-11 21:31:18.431548] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:17:57.708 [2024-07-11 21:31:18.431631] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68366 ] 00:17:57.708 [2024-07-11 21:31:18.565820] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:57.708 [2024-07-11 21:31:18.647665] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:59.085 21:31:19 -- accel/accel.sh@18 -- # out=' 00:17:59.085 SPDK Configuration: 00:17:59.085 Core mask: 0x1 00:17:59.085 00:17:59.085 Accel Perf Configuration: 00:17:59.085 Workload Type: copy 00:17:59.085 Transfer size: 4096 bytes 00:17:59.085 Vector count 1 00:17:59.085 Module: software 00:17:59.085 Queue depth: 32 00:17:59.085 Allocate depth: 32 00:17:59.085 # threads/core: 1 00:17:59.085 Run time: 1 seconds 00:17:59.085 Verify: Yes 00:17:59.085 00:17:59.085 Running for 1 seconds... 
00:17:59.085 00:17:59.085 Core,Thread Transfers Bandwidth Failed Miscompares 00:17:59.085 ------------------------------------------------------------------------------------ 00:17:59.085 0,0 303872/s 1187 MiB/s 0 0 00:17:59.085 ==================================================================================== 00:17:59.085 Total 303872/s 1187 MiB/s 0 0' 00:17:59.085 21:31:19 -- accel/accel.sh@20 -- # IFS=: 00:17:59.085 21:31:19 -- accel/accel.sh@20 -- # read -r var val 00:17:59.085 21:31:19 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:17:59.085 21:31:19 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:17:59.085 21:31:19 -- accel/accel.sh@12 -- # build_accel_config 00:17:59.085 21:31:19 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:17:59.085 21:31:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:17:59.085 21:31:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:17:59.085 21:31:19 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:17:59.085 21:31:19 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:17:59.085 21:31:19 -- accel/accel.sh@41 -- # local IFS=, 00:17:59.085 21:31:19 -- accel/accel.sh@42 -- # jq -r . 00:17:59.085 [2024-07-11 21:31:19.870910] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:17:59.085 [2024-07-11 21:31:19.870997] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68380 ] 00:17:59.085 [2024-07-11 21:31:20.007845] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:59.343 [2024-07-11 21:31:20.102451] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:59.343 21:31:20 -- accel/accel.sh@21 -- # val= 00:17:59.343 21:31:20 -- accel/accel.sh@22 -- # case "$var" in 00:17:59.343 21:31:20 -- accel/accel.sh@20 -- # IFS=: 00:17:59.343 21:31:20 -- accel/accel.sh@20 -- # read -r var val 00:17:59.343 21:31:20 -- accel/accel.sh@21 -- # val= 00:17:59.343 21:31:20 -- accel/accel.sh@22 -- # case "$var" in 00:17:59.343 21:31:20 -- accel/accel.sh@20 -- # IFS=: 00:17:59.343 21:31:20 -- accel/accel.sh@20 -- # read -r var val 00:17:59.343 21:31:20 -- accel/accel.sh@21 -- # val=0x1 00:17:59.343 21:31:20 -- accel/accel.sh@22 -- # case "$var" in 00:17:59.343 21:31:20 -- accel/accel.sh@20 -- # IFS=: 00:17:59.343 21:31:20 -- accel/accel.sh@20 -- # read -r var val 00:17:59.343 21:31:20 -- accel/accel.sh@21 -- # val= 00:17:59.343 21:31:20 -- accel/accel.sh@22 -- # case "$var" in 00:17:59.343 21:31:20 -- accel/accel.sh@20 -- # IFS=: 00:17:59.343 21:31:20 -- accel/accel.sh@20 -- # read -r var val 00:17:59.344 21:31:20 -- accel/accel.sh@21 -- # val= 00:17:59.344 21:31:20 -- accel/accel.sh@22 -- # case "$var" in 00:17:59.344 21:31:20 -- accel/accel.sh@20 -- # IFS=: 00:17:59.344 21:31:20 -- accel/accel.sh@20 -- # read -r var val 00:17:59.344 21:31:20 -- accel/accel.sh@21 -- # val=copy 00:17:59.344 21:31:20 -- accel/accel.sh@22 -- # case "$var" in 00:17:59.344 21:31:20 -- accel/accel.sh@24 -- # accel_opc=copy 00:17:59.344 21:31:20 -- accel/accel.sh@20 -- # IFS=: 00:17:59.344 21:31:20 -- accel/accel.sh@20 -- # read -r var val 00:17:59.344 21:31:20 -- accel/accel.sh@21 -- # val='4096 bytes' 00:17:59.344 21:31:20 -- accel/accel.sh@22 -- # case "$var" in 00:17:59.344 21:31:20 -- accel/accel.sh@20 -- # IFS=: 00:17:59.344 21:31:20 -- accel/accel.sh@20 -- # read -r var val 00:17:59.344 21:31:20 -- 
accel/accel.sh@21 -- # val= 00:17:59.344 21:31:20 -- accel/accel.sh@22 -- # case "$var" in 00:17:59.344 21:31:20 -- accel/accel.sh@20 -- # IFS=: 00:17:59.344 21:31:20 -- accel/accel.sh@20 -- # read -r var val 00:17:59.344 21:31:20 -- accel/accel.sh@21 -- # val=software 00:17:59.344 21:31:20 -- accel/accel.sh@22 -- # case "$var" in 00:17:59.344 21:31:20 -- accel/accel.sh@23 -- # accel_module=software 00:17:59.344 21:31:20 -- accel/accel.sh@20 -- # IFS=: 00:17:59.344 21:31:20 -- accel/accel.sh@20 -- # read -r var val 00:17:59.344 21:31:20 -- accel/accel.sh@21 -- # val=32 00:17:59.344 21:31:20 -- accel/accel.sh@22 -- # case "$var" in 00:17:59.344 21:31:20 -- accel/accel.sh@20 -- # IFS=: 00:17:59.344 21:31:20 -- accel/accel.sh@20 -- # read -r var val 00:17:59.344 21:31:20 -- accel/accel.sh@21 -- # val=32 00:17:59.344 21:31:20 -- accel/accel.sh@22 -- # case "$var" in 00:17:59.344 21:31:20 -- accel/accel.sh@20 -- # IFS=: 00:17:59.344 21:31:20 -- accel/accel.sh@20 -- # read -r var val 00:17:59.344 21:31:20 -- accel/accel.sh@21 -- # val=1 00:17:59.344 21:31:20 -- accel/accel.sh@22 -- # case "$var" in 00:17:59.344 21:31:20 -- accel/accel.sh@20 -- # IFS=: 00:17:59.344 21:31:20 -- accel/accel.sh@20 -- # read -r var val 00:17:59.344 21:31:20 -- accel/accel.sh@21 -- # val='1 seconds' 00:17:59.344 21:31:20 -- accel/accel.sh@22 -- # case "$var" in 00:17:59.344 21:31:20 -- accel/accel.sh@20 -- # IFS=: 00:17:59.344 21:31:20 -- accel/accel.sh@20 -- # read -r var val 00:17:59.344 21:31:20 -- accel/accel.sh@21 -- # val=Yes 00:17:59.344 21:31:20 -- accel/accel.sh@22 -- # case "$var" in 00:17:59.344 21:31:20 -- accel/accel.sh@20 -- # IFS=: 00:17:59.344 21:31:20 -- accel/accel.sh@20 -- # read -r var val 00:17:59.344 21:31:20 -- accel/accel.sh@21 -- # val= 00:17:59.344 21:31:20 -- accel/accel.sh@22 -- # case "$var" in 00:17:59.344 21:31:20 -- accel/accel.sh@20 -- # IFS=: 00:17:59.344 21:31:20 -- accel/accel.sh@20 -- # read -r var val 00:17:59.344 21:31:20 -- accel/accel.sh@21 -- # val= 00:17:59.344 21:31:20 -- accel/accel.sh@22 -- # case "$var" in 00:17:59.344 21:31:20 -- accel/accel.sh@20 -- # IFS=: 00:17:59.344 21:31:20 -- accel/accel.sh@20 -- # read -r var val 00:18:00.720 21:31:21 -- accel/accel.sh@21 -- # val= 00:18:00.720 21:31:21 -- accel/accel.sh@22 -- # case "$var" in 00:18:00.720 21:31:21 -- accel/accel.sh@20 -- # IFS=: 00:18:00.720 21:31:21 -- accel/accel.sh@20 -- # read -r var val 00:18:00.720 21:31:21 -- accel/accel.sh@21 -- # val= 00:18:00.720 21:31:21 -- accel/accel.sh@22 -- # case "$var" in 00:18:00.720 21:31:21 -- accel/accel.sh@20 -- # IFS=: 00:18:00.720 21:31:21 -- accel/accel.sh@20 -- # read -r var val 00:18:00.720 21:31:21 -- accel/accel.sh@21 -- # val= 00:18:00.720 21:31:21 -- accel/accel.sh@22 -- # case "$var" in 00:18:00.720 21:31:21 -- accel/accel.sh@20 -- # IFS=: 00:18:00.720 21:31:21 -- accel/accel.sh@20 -- # read -r var val 00:18:00.720 21:31:21 -- accel/accel.sh@21 -- # val= 00:18:00.720 21:31:21 -- accel/accel.sh@22 -- # case "$var" in 00:18:00.720 21:31:21 -- accel/accel.sh@20 -- # IFS=: 00:18:00.720 21:31:21 -- accel/accel.sh@20 -- # read -r var val 00:18:00.720 21:31:21 -- accel/accel.sh@21 -- # val= 00:18:00.720 21:31:21 -- accel/accel.sh@22 -- # case "$var" in 00:18:00.720 21:31:21 -- accel/accel.sh@20 -- # IFS=: 00:18:00.720 21:31:21 -- accel/accel.sh@20 -- # read -r var val 00:18:00.720 21:31:21 -- accel/accel.sh@21 -- # val= 00:18:00.720 21:31:21 -- accel/accel.sh@22 -- # case "$var" in 00:18:00.720 21:31:21 -- accel/accel.sh@20 -- # IFS=: 00:18:00.720 21:31:21 -- 
accel/accel.sh@20 -- # read -r var val 00:18:00.720 21:31:21 -- accel/accel.sh@28 -- # [[ -n software ]] 00:18:00.720 21:31:21 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:18:00.720 21:31:21 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:00.720 00:18:00.720 real 0m2.899s 00:18:00.720 user 0m2.480s 00:18:00.720 sys 0m0.215s 00:18:00.720 21:31:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:00.720 ************************************ 00:18:00.720 END TEST accel_copy 00:18:00.720 ************************************ 00:18:00.720 21:31:21 -- common/autotest_common.sh@10 -- # set +x 00:18:00.720 21:31:21 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:18:00.720 21:31:21 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:18:00.720 21:31:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:00.720 21:31:21 -- common/autotest_common.sh@10 -- # set +x 00:18:00.720 ************************************ 00:18:00.720 START TEST accel_fill 00:18:00.720 ************************************ 00:18:00.720 21:31:21 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:18:00.720 21:31:21 -- accel/accel.sh@16 -- # local accel_opc 00:18:00.720 21:31:21 -- accel/accel.sh@17 -- # local accel_module 00:18:00.720 21:31:21 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:18:00.720 21:31:21 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:18:00.720 21:31:21 -- accel/accel.sh@12 -- # build_accel_config 00:18:00.720 21:31:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:18:00.720 21:31:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:18:00.720 21:31:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:18:00.720 21:31:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:18:00.720 21:31:21 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:18:00.720 21:31:21 -- accel/accel.sh@41 -- # local IFS=, 00:18:00.720 21:31:21 -- accel/accel.sh@42 -- # jq -r . 00:18:00.720 [2024-07-11 21:31:21.377952] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:18:00.720 [2024-07-11 21:31:21.378050] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68420 ] 00:18:00.720 [2024-07-11 21:31:21.519799] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:00.720 [2024-07-11 21:31:21.612520] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:02.091 21:31:22 -- accel/accel.sh@18 -- # out=' 00:18:02.091 SPDK Configuration: 00:18:02.091 Core mask: 0x1 00:18:02.091 00:18:02.091 Accel Perf Configuration: 00:18:02.091 Workload Type: fill 00:18:02.091 Fill pattern: 0x80 00:18:02.091 Transfer size: 4096 bytes 00:18:02.091 Vector count 1 00:18:02.091 Module: software 00:18:02.091 Queue depth: 64 00:18:02.092 Allocate depth: 64 00:18:02.092 # threads/core: 1 00:18:02.092 Run time: 1 seconds 00:18:02.092 Verify: Yes 00:18:02.092 00:18:02.092 Running for 1 seconds... 
00:18:02.092 00:18:02.092 Core,Thread Transfers Bandwidth Failed Miscompares 00:18:02.092 ------------------------------------------------------------------------------------ 00:18:02.092 0,0 459136/s 1793 MiB/s 0 0 00:18:02.092 ==================================================================================== 00:18:02.092 Total 459136/s 1793 MiB/s 0 0' 00:18:02.092 21:31:22 -- accel/accel.sh@20 -- # IFS=: 00:18:02.092 21:31:22 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:18:02.092 21:31:22 -- accel/accel.sh@20 -- # read -r var val 00:18:02.092 21:31:22 -- accel/accel.sh@12 -- # build_accel_config 00:18:02.092 21:31:22 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:18:02.092 21:31:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:18:02.092 21:31:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:18:02.092 21:31:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:18:02.092 21:31:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:18:02.092 21:31:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:18:02.092 21:31:22 -- accel/accel.sh@41 -- # local IFS=, 00:18:02.092 21:31:22 -- accel/accel.sh@42 -- # jq -r . 00:18:02.092 [2024-07-11 21:31:22.835431] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:18:02.092 [2024-07-11 21:31:22.835546] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68434 ] 00:18:02.092 [2024-07-11 21:31:22.975625] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:02.349 [2024-07-11 21:31:23.060854] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:02.349 21:31:23 -- accel/accel.sh@21 -- # val= 00:18:02.349 21:31:23 -- accel/accel.sh@22 -- # case "$var" in 00:18:02.349 21:31:23 -- accel/accel.sh@20 -- # IFS=: 00:18:02.349 21:31:23 -- accel/accel.sh@20 -- # read -r var val 00:18:02.349 21:31:23 -- accel/accel.sh@21 -- # val= 00:18:02.349 21:31:23 -- accel/accel.sh@22 -- # case "$var" in 00:18:02.349 21:31:23 -- accel/accel.sh@20 -- # IFS=: 00:18:02.349 21:31:23 -- accel/accel.sh@20 -- # read -r var val 00:18:02.349 21:31:23 -- accel/accel.sh@21 -- # val=0x1 00:18:02.349 21:31:23 -- accel/accel.sh@22 -- # case "$var" in 00:18:02.349 21:31:23 -- accel/accel.sh@20 -- # IFS=: 00:18:02.349 21:31:23 -- accel/accel.sh@20 -- # read -r var val 00:18:02.349 21:31:23 -- accel/accel.sh@21 -- # val= 00:18:02.349 21:31:23 -- accel/accel.sh@22 -- # case "$var" in 00:18:02.349 21:31:23 -- accel/accel.sh@20 -- # IFS=: 00:18:02.349 21:31:23 -- accel/accel.sh@20 -- # read -r var val 00:18:02.349 21:31:23 -- accel/accel.sh@21 -- # val= 00:18:02.349 21:31:23 -- accel/accel.sh@22 -- # case "$var" in 00:18:02.349 21:31:23 -- accel/accel.sh@20 -- # IFS=: 00:18:02.349 21:31:23 -- accel/accel.sh@20 -- # read -r var val 00:18:02.349 21:31:23 -- accel/accel.sh@21 -- # val=fill 00:18:02.349 21:31:23 -- accel/accel.sh@22 -- # case "$var" in 00:18:02.349 21:31:23 -- accel/accel.sh@24 -- # accel_opc=fill 00:18:02.349 21:31:23 -- accel/accel.sh@20 -- # IFS=: 00:18:02.349 21:31:23 -- accel/accel.sh@20 -- # read -r var val 00:18:02.349 21:31:23 -- accel/accel.sh@21 -- # val=0x80 00:18:02.349 21:31:23 -- accel/accel.sh@22 -- # case "$var" in 00:18:02.349 21:31:23 -- accel/accel.sh@20 -- # IFS=: 00:18:02.349 21:31:23 -- accel/accel.sh@20 -- # read -r var val 
00:18:02.349 21:31:23 -- accel/accel.sh@21 -- # val='4096 bytes' 00:18:02.349 21:31:23 -- accel/accel.sh@22 -- # case "$var" in 00:18:02.349 21:31:23 -- accel/accel.sh@20 -- # IFS=: 00:18:02.349 21:31:23 -- accel/accel.sh@20 -- # read -r var val 00:18:02.349 21:31:23 -- accel/accel.sh@21 -- # val= 00:18:02.349 21:31:23 -- accel/accel.sh@22 -- # case "$var" in 00:18:02.349 21:31:23 -- accel/accel.sh@20 -- # IFS=: 00:18:02.349 21:31:23 -- accel/accel.sh@20 -- # read -r var val 00:18:02.349 21:31:23 -- accel/accel.sh@21 -- # val=software 00:18:02.349 21:31:23 -- accel/accel.sh@22 -- # case "$var" in 00:18:02.349 21:31:23 -- accel/accel.sh@23 -- # accel_module=software 00:18:02.349 21:31:23 -- accel/accel.sh@20 -- # IFS=: 00:18:02.349 21:31:23 -- accel/accel.sh@20 -- # read -r var val 00:18:02.349 21:31:23 -- accel/accel.sh@21 -- # val=64 00:18:02.349 21:31:23 -- accel/accel.sh@22 -- # case "$var" in 00:18:02.349 21:31:23 -- accel/accel.sh@20 -- # IFS=: 00:18:02.349 21:31:23 -- accel/accel.sh@20 -- # read -r var val 00:18:02.349 21:31:23 -- accel/accel.sh@21 -- # val=64 00:18:02.349 21:31:23 -- accel/accel.sh@22 -- # case "$var" in 00:18:02.349 21:31:23 -- accel/accel.sh@20 -- # IFS=: 00:18:02.349 21:31:23 -- accel/accel.sh@20 -- # read -r var val 00:18:02.349 21:31:23 -- accel/accel.sh@21 -- # val=1 00:18:02.349 21:31:23 -- accel/accel.sh@22 -- # case "$var" in 00:18:02.349 21:31:23 -- accel/accel.sh@20 -- # IFS=: 00:18:02.349 21:31:23 -- accel/accel.sh@20 -- # read -r var val 00:18:02.349 21:31:23 -- accel/accel.sh@21 -- # val='1 seconds' 00:18:02.349 21:31:23 -- accel/accel.sh@22 -- # case "$var" in 00:18:02.349 21:31:23 -- accel/accel.sh@20 -- # IFS=: 00:18:02.349 21:31:23 -- accel/accel.sh@20 -- # read -r var val 00:18:02.349 21:31:23 -- accel/accel.sh@21 -- # val=Yes 00:18:02.349 21:31:23 -- accel/accel.sh@22 -- # case "$var" in 00:18:02.349 21:31:23 -- accel/accel.sh@20 -- # IFS=: 00:18:02.349 21:31:23 -- accel/accel.sh@20 -- # read -r var val 00:18:02.349 21:31:23 -- accel/accel.sh@21 -- # val= 00:18:02.349 21:31:23 -- accel/accel.sh@22 -- # case "$var" in 00:18:02.349 21:31:23 -- accel/accel.sh@20 -- # IFS=: 00:18:02.349 21:31:23 -- accel/accel.sh@20 -- # read -r var val 00:18:02.349 21:31:23 -- accel/accel.sh@21 -- # val= 00:18:02.349 21:31:23 -- accel/accel.sh@22 -- # case "$var" in 00:18:02.349 21:31:23 -- accel/accel.sh@20 -- # IFS=: 00:18:02.349 21:31:23 -- accel/accel.sh@20 -- # read -r var val 00:18:03.721 21:31:24 -- accel/accel.sh@21 -- # val= 00:18:03.721 21:31:24 -- accel/accel.sh@22 -- # case "$var" in 00:18:03.721 21:31:24 -- accel/accel.sh@20 -- # IFS=: 00:18:03.721 21:31:24 -- accel/accel.sh@20 -- # read -r var val 00:18:03.721 21:31:24 -- accel/accel.sh@21 -- # val= 00:18:03.721 21:31:24 -- accel/accel.sh@22 -- # case "$var" in 00:18:03.721 21:31:24 -- accel/accel.sh@20 -- # IFS=: 00:18:03.721 21:31:24 -- accel/accel.sh@20 -- # read -r var val 00:18:03.721 21:31:24 -- accel/accel.sh@21 -- # val= 00:18:03.721 21:31:24 -- accel/accel.sh@22 -- # case "$var" in 00:18:03.721 21:31:24 -- accel/accel.sh@20 -- # IFS=: 00:18:03.721 21:31:24 -- accel/accel.sh@20 -- # read -r var val 00:18:03.721 21:31:24 -- accel/accel.sh@21 -- # val= 00:18:03.721 21:31:24 -- accel/accel.sh@22 -- # case "$var" in 00:18:03.721 21:31:24 -- accel/accel.sh@20 -- # IFS=: 00:18:03.721 21:31:24 -- accel/accel.sh@20 -- # read -r var val 00:18:03.721 21:31:24 -- accel/accel.sh@21 -- # val= 00:18:03.721 21:31:24 -- accel/accel.sh@22 -- # case "$var" in 00:18:03.721 21:31:24 -- accel/accel.sh@20 -- # IFS=: 
00:18:03.721 21:31:24 -- accel/accel.sh@20 -- # read -r var val 00:18:03.721 21:31:24 -- accel/accel.sh@21 -- # val= 00:18:03.721 ************************************ 00:18:03.721 END TEST accel_fill 00:18:03.721 ************************************ 00:18:03.721 21:31:24 -- accel/accel.sh@22 -- # case "$var" in 00:18:03.721 21:31:24 -- accel/accel.sh@20 -- # IFS=: 00:18:03.721 21:31:24 -- accel/accel.sh@20 -- # read -r var val 00:18:03.721 21:31:24 -- accel/accel.sh@28 -- # [[ -n software ]] 00:18:03.721 21:31:24 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:18:03.721 21:31:24 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:03.721 00:18:03.721 real 0m2.909s 00:18:03.721 user 0m2.483s 00:18:03.721 sys 0m0.226s 00:18:03.721 21:31:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:03.721 21:31:24 -- common/autotest_common.sh@10 -- # set +x 00:18:03.721 21:31:24 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:18:03.721 21:31:24 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:18:03.721 21:31:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:03.721 21:31:24 -- common/autotest_common.sh@10 -- # set +x 00:18:03.721 ************************************ 00:18:03.721 START TEST accel_copy_crc32c 00:18:03.721 ************************************ 00:18:03.721 21:31:24 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y 00:18:03.721 21:31:24 -- accel/accel.sh@16 -- # local accel_opc 00:18:03.721 21:31:24 -- accel/accel.sh@17 -- # local accel_module 00:18:03.721 21:31:24 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:18:03.721 21:31:24 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:18:03.721 21:31:24 -- accel/accel.sh@12 -- # build_accel_config 00:18:03.721 21:31:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:18:03.721 21:31:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:18:03.721 21:31:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:18:03.721 21:31:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:18:03.721 21:31:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:18:03.721 21:31:24 -- accel/accel.sh@41 -- # local IFS=, 00:18:03.721 21:31:24 -- accel/accel.sh@42 -- # jq -r . 00:18:03.721 [2024-07-11 21:31:24.338596] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:18:03.721 [2024-07-11 21:31:24.338723] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68468 ] 00:18:03.721 [2024-07-11 21:31:24.474620] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:03.721 [2024-07-11 21:31:24.573029] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:05.095 21:31:25 -- accel/accel.sh@18 -- # out=' 00:18:05.095 SPDK Configuration: 00:18:05.095 Core mask: 0x1 00:18:05.095 00:18:05.095 Accel Perf Configuration: 00:18:05.095 Workload Type: copy_crc32c 00:18:05.095 CRC-32C seed: 0 00:18:05.095 Vector size: 4096 bytes 00:18:05.095 Transfer size: 4096 bytes 00:18:05.095 Vector count 1 00:18:05.095 Module: software 00:18:05.095 Queue depth: 32 00:18:05.095 Allocate depth: 32 00:18:05.095 # threads/core: 1 00:18:05.095 Run time: 1 seconds 00:18:05.095 Verify: Yes 00:18:05.095 00:18:05.095 Running for 1 seconds... 
00:18:05.095 00:18:05.095 Core,Thread Transfers Bandwidth Failed Miscompares 00:18:05.095 ------------------------------------------------------------------------------------ 00:18:05.095 0,0 236256/s 922 MiB/s 0 0 00:18:05.095 ==================================================================================== 00:18:05.095 Total 236256/s 922 MiB/s 0 0' 00:18:05.095 21:31:25 -- accel/accel.sh@20 -- # IFS=: 00:18:05.095 21:31:25 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:18:05.095 21:31:25 -- accel/accel.sh@20 -- # read -r var val 00:18:05.095 21:31:25 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:18:05.095 21:31:25 -- accel/accel.sh@12 -- # build_accel_config 00:18:05.095 21:31:25 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:18:05.095 21:31:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:18:05.095 21:31:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:18:05.095 21:31:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:18:05.095 21:31:25 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:18:05.095 21:31:25 -- accel/accel.sh@41 -- # local IFS=, 00:18:05.095 21:31:25 -- accel/accel.sh@42 -- # jq -r . 00:18:05.095 [2024-07-11 21:31:25.804749] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:18:05.095 [2024-07-11 21:31:25.804859] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68488 ] 00:18:05.095 [2024-07-11 21:31:25.940225] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:05.095 [2024-07-11 21:31:26.035998] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:05.354 21:31:26 -- accel/accel.sh@21 -- # val= 00:18:05.354 21:31:26 -- accel/accel.sh@22 -- # case "$var" in 00:18:05.354 21:31:26 -- accel/accel.sh@20 -- # IFS=: 00:18:05.354 21:31:26 -- accel/accel.sh@20 -- # read -r var val 00:18:05.354 21:31:26 -- accel/accel.sh@21 -- # val= 00:18:05.354 21:31:26 -- accel/accel.sh@22 -- # case "$var" in 00:18:05.354 21:31:26 -- accel/accel.sh@20 -- # IFS=: 00:18:05.354 21:31:26 -- accel/accel.sh@20 -- # read -r var val 00:18:05.354 21:31:26 -- accel/accel.sh@21 -- # val=0x1 00:18:05.354 21:31:26 -- accel/accel.sh@22 -- # case "$var" in 00:18:05.354 21:31:26 -- accel/accel.sh@20 -- # IFS=: 00:18:05.354 21:31:26 -- accel/accel.sh@20 -- # read -r var val 00:18:05.354 21:31:26 -- accel/accel.sh@21 -- # val= 00:18:05.354 21:31:26 -- accel/accel.sh@22 -- # case "$var" in 00:18:05.354 21:31:26 -- accel/accel.sh@20 -- # IFS=: 00:18:05.354 21:31:26 -- accel/accel.sh@20 -- # read -r var val 00:18:05.354 21:31:26 -- accel/accel.sh@21 -- # val= 00:18:05.354 21:31:26 -- accel/accel.sh@22 -- # case "$var" in 00:18:05.354 21:31:26 -- accel/accel.sh@20 -- # IFS=: 00:18:05.354 21:31:26 -- accel/accel.sh@20 -- # read -r var val 00:18:05.354 21:31:26 -- accel/accel.sh@21 -- # val=copy_crc32c 00:18:05.354 21:31:26 -- accel/accel.sh@22 -- # case "$var" in 00:18:05.354 21:31:26 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:18:05.354 21:31:26 -- accel/accel.sh@20 -- # IFS=: 00:18:05.354 21:31:26 -- accel/accel.sh@20 -- # read -r var val 00:18:05.354 21:31:26 -- accel/accel.sh@21 -- # val=0 00:18:05.354 21:31:26 -- accel/accel.sh@22 -- # case "$var" in 00:18:05.354 21:31:26 -- accel/accel.sh@20 -- # IFS=: 00:18:05.354 21:31:26 -- accel/accel.sh@20 -- # read -r var val 00:18:05.354 
21:31:26 -- accel/accel.sh@21 -- # val='4096 bytes' 00:18:05.354 21:31:26 -- accel/accel.sh@22 -- # case "$var" in 00:18:05.354 21:31:26 -- accel/accel.sh@20 -- # IFS=: 00:18:05.354 21:31:26 -- accel/accel.sh@20 -- # read -r var val 00:18:05.354 21:31:26 -- accel/accel.sh@21 -- # val='4096 bytes' 00:18:05.354 21:31:26 -- accel/accel.sh@22 -- # case "$var" in 00:18:05.354 21:31:26 -- accel/accel.sh@20 -- # IFS=: 00:18:05.354 21:31:26 -- accel/accel.sh@20 -- # read -r var val 00:18:05.354 21:31:26 -- accel/accel.sh@21 -- # val= 00:18:05.354 21:31:26 -- accel/accel.sh@22 -- # case "$var" in 00:18:05.354 21:31:26 -- accel/accel.sh@20 -- # IFS=: 00:18:05.354 21:31:26 -- accel/accel.sh@20 -- # read -r var val 00:18:05.354 21:31:26 -- accel/accel.sh@21 -- # val=software 00:18:05.354 21:31:26 -- accel/accel.sh@22 -- # case "$var" in 00:18:05.354 21:31:26 -- accel/accel.sh@23 -- # accel_module=software 00:18:05.354 21:31:26 -- accel/accel.sh@20 -- # IFS=: 00:18:05.354 21:31:26 -- accel/accel.sh@20 -- # read -r var val 00:18:05.354 21:31:26 -- accel/accel.sh@21 -- # val=32 00:18:05.354 21:31:26 -- accel/accel.sh@22 -- # case "$var" in 00:18:05.354 21:31:26 -- accel/accel.sh@20 -- # IFS=: 00:18:05.354 21:31:26 -- accel/accel.sh@20 -- # read -r var val 00:18:05.354 21:31:26 -- accel/accel.sh@21 -- # val=32 00:18:05.354 21:31:26 -- accel/accel.sh@22 -- # case "$var" in 00:18:05.354 21:31:26 -- accel/accel.sh@20 -- # IFS=: 00:18:05.354 21:31:26 -- accel/accel.sh@20 -- # read -r var val 00:18:05.354 21:31:26 -- accel/accel.sh@21 -- # val=1 00:18:05.354 21:31:26 -- accel/accel.sh@22 -- # case "$var" in 00:18:05.354 21:31:26 -- accel/accel.sh@20 -- # IFS=: 00:18:05.354 21:31:26 -- accel/accel.sh@20 -- # read -r var val 00:18:05.354 21:31:26 -- accel/accel.sh@21 -- # val='1 seconds' 00:18:05.354 21:31:26 -- accel/accel.sh@22 -- # case "$var" in 00:18:05.354 21:31:26 -- accel/accel.sh@20 -- # IFS=: 00:18:05.354 21:31:26 -- accel/accel.sh@20 -- # read -r var val 00:18:05.354 21:31:26 -- accel/accel.sh@21 -- # val=Yes 00:18:05.354 21:31:26 -- accel/accel.sh@22 -- # case "$var" in 00:18:05.354 21:31:26 -- accel/accel.sh@20 -- # IFS=: 00:18:05.354 21:31:26 -- accel/accel.sh@20 -- # read -r var val 00:18:05.354 21:31:26 -- accel/accel.sh@21 -- # val= 00:18:05.354 21:31:26 -- accel/accel.sh@22 -- # case "$var" in 00:18:05.354 21:31:26 -- accel/accel.sh@20 -- # IFS=: 00:18:05.354 21:31:26 -- accel/accel.sh@20 -- # read -r var val 00:18:05.354 21:31:26 -- accel/accel.sh@21 -- # val= 00:18:05.354 21:31:26 -- accel/accel.sh@22 -- # case "$var" in 00:18:05.354 21:31:26 -- accel/accel.sh@20 -- # IFS=: 00:18:05.354 21:31:26 -- accel/accel.sh@20 -- # read -r var val 00:18:06.725 21:31:27 -- accel/accel.sh@21 -- # val= 00:18:06.725 21:31:27 -- accel/accel.sh@22 -- # case "$var" in 00:18:06.725 21:31:27 -- accel/accel.sh@20 -- # IFS=: 00:18:06.725 21:31:27 -- accel/accel.sh@20 -- # read -r var val 00:18:06.725 21:31:27 -- accel/accel.sh@21 -- # val= 00:18:06.725 21:31:27 -- accel/accel.sh@22 -- # case "$var" in 00:18:06.725 21:31:27 -- accel/accel.sh@20 -- # IFS=: 00:18:06.725 21:31:27 -- accel/accel.sh@20 -- # read -r var val 00:18:06.725 21:31:27 -- accel/accel.sh@21 -- # val= 00:18:06.725 21:31:27 -- accel/accel.sh@22 -- # case "$var" in 00:18:06.725 21:31:27 -- accel/accel.sh@20 -- # IFS=: 00:18:06.725 21:31:27 -- accel/accel.sh@20 -- # read -r var val 00:18:06.725 21:31:27 -- accel/accel.sh@21 -- # val= 00:18:06.725 21:31:27 -- accel/accel.sh@22 -- # case "$var" in 00:18:06.725 21:31:27 -- accel/accel.sh@20 -- # IFS=: 
00:18:06.725 21:31:27 -- accel/accel.sh@20 -- # read -r var val 00:18:06.725 21:31:27 -- accel/accel.sh@21 -- # val= 00:18:06.725 21:31:27 -- accel/accel.sh@22 -- # case "$var" in 00:18:06.725 21:31:27 -- accel/accel.sh@20 -- # IFS=: 00:18:06.725 21:31:27 -- accel/accel.sh@20 -- # read -r var val 00:18:06.725 21:31:27 -- accel/accel.sh@21 -- # val= 00:18:06.725 21:31:27 -- accel/accel.sh@22 -- # case "$var" in 00:18:06.725 21:31:27 -- accel/accel.sh@20 -- # IFS=: 00:18:06.725 21:31:27 -- accel/accel.sh@20 -- # read -r var val 00:18:06.725 ************************************ 00:18:06.725 END TEST accel_copy_crc32c 00:18:06.725 ************************************ 00:18:06.725 21:31:27 -- accel/accel.sh@28 -- # [[ -n software ]] 00:18:06.725 21:31:27 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:18:06.725 21:31:27 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:06.725 00:18:06.725 real 0m2.933s 00:18:06.725 user 0m2.493s 00:18:06.725 sys 0m0.233s 00:18:06.725 21:31:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:06.725 21:31:27 -- common/autotest_common.sh@10 -- # set +x 00:18:06.725 21:31:27 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:18:06.725 21:31:27 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:18:06.725 21:31:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:06.725 21:31:27 -- common/autotest_common.sh@10 -- # set +x 00:18:06.725 ************************************ 00:18:06.725 START TEST accel_copy_crc32c_C2 00:18:06.725 ************************************ 00:18:06.725 21:31:27 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:18:06.725 21:31:27 -- accel/accel.sh@16 -- # local accel_opc 00:18:06.725 21:31:27 -- accel/accel.sh@17 -- # local accel_module 00:18:06.725 21:31:27 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:18:06.725 21:31:27 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:18:06.725 21:31:27 -- accel/accel.sh@12 -- # build_accel_config 00:18:06.725 21:31:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:18:06.725 21:31:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:18:06.725 21:31:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:18:06.725 21:31:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:18:06.725 21:31:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:18:06.725 21:31:27 -- accel/accel.sh@41 -- # local IFS=, 00:18:06.725 21:31:27 -- accel/accel.sh@42 -- # jq -r . 00:18:06.725 [2024-07-11 21:31:27.326125] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:18:06.725 [2024-07-11 21:31:27.326265] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68522 ] 00:18:06.725 [2024-07-11 21:31:27.463623] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:06.725 [2024-07-11 21:31:27.566641] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:08.096 21:31:28 -- accel/accel.sh@18 -- # out=' 00:18:08.096 SPDK Configuration: 00:18:08.096 Core mask: 0x1 00:18:08.096 00:18:08.096 Accel Perf Configuration: 00:18:08.096 Workload Type: copy_crc32c 00:18:08.096 CRC-32C seed: 0 00:18:08.096 Vector size: 4096 bytes 00:18:08.096 Transfer size: 8192 bytes 00:18:08.096 Vector count 2 00:18:08.096 Module: software 00:18:08.096 Queue depth: 32 00:18:08.096 Allocate depth: 32 00:18:08.096 # threads/core: 1 00:18:08.096 Run time: 1 seconds 00:18:08.096 Verify: Yes 00:18:08.096 00:18:08.096 Running for 1 seconds... 00:18:08.096 00:18:08.096 Core,Thread Transfers Bandwidth Failed Miscompares 00:18:08.096 ------------------------------------------------------------------------------------ 00:18:08.096 0,0 175456/s 1370 MiB/s 0 0 00:18:08.096 ==================================================================================== 00:18:08.096 Total 175456/s 685 MiB/s 0 0' 00:18:08.096 21:31:28 -- accel/accel.sh@20 -- # IFS=: 00:18:08.096 21:31:28 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:18:08.096 21:31:28 -- accel/accel.sh@20 -- # read -r var val 00:18:08.096 21:31:28 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:18:08.096 21:31:28 -- accel/accel.sh@12 -- # build_accel_config 00:18:08.096 21:31:28 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:18:08.096 21:31:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:18:08.096 21:31:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:18:08.096 21:31:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:18:08.096 21:31:28 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:18:08.096 21:31:28 -- accel/accel.sh@41 -- # local IFS=, 00:18:08.096 21:31:28 -- accel/accel.sh@42 -- # jq -r . 00:18:08.096 [2024-07-11 21:31:28.795991] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:18:08.097 [2024-07-11 21:31:28.796093] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68542 ] 00:18:08.097 [2024-07-11 21:31:28.927469] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:08.097 [2024-07-11 21:31:29.023328] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:08.355 21:31:29 -- accel/accel.sh@21 -- # val= 00:18:08.355 21:31:29 -- accel/accel.sh@22 -- # case "$var" in 00:18:08.355 21:31:29 -- accel/accel.sh@20 -- # IFS=: 00:18:08.355 21:31:29 -- accel/accel.sh@20 -- # read -r var val 00:18:08.355 21:31:29 -- accel/accel.sh@21 -- # val= 00:18:08.355 21:31:29 -- accel/accel.sh@22 -- # case "$var" in 00:18:08.355 21:31:29 -- accel/accel.sh@20 -- # IFS=: 00:18:08.355 21:31:29 -- accel/accel.sh@20 -- # read -r var val 00:18:08.355 21:31:29 -- accel/accel.sh@21 -- # val=0x1 00:18:08.355 21:31:29 -- accel/accel.sh@22 -- # case "$var" in 00:18:08.355 21:31:29 -- accel/accel.sh@20 -- # IFS=: 00:18:08.355 21:31:29 -- accel/accel.sh@20 -- # read -r var val 00:18:08.355 21:31:29 -- accel/accel.sh@21 -- # val= 00:18:08.355 21:31:29 -- accel/accel.sh@22 -- # case "$var" in 00:18:08.355 21:31:29 -- accel/accel.sh@20 -- # IFS=: 00:18:08.355 21:31:29 -- accel/accel.sh@20 -- # read -r var val 00:18:08.355 21:31:29 -- accel/accel.sh@21 -- # val= 00:18:08.355 21:31:29 -- accel/accel.sh@22 -- # case "$var" in 00:18:08.355 21:31:29 -- accel/accel.sh@20 -- # IFS=: 00:18:08.355 21:31:29 -- accel/accel.sh@20 -- # read -r var val 00:18:08.355 21:31:29 -- accel/accel.sh@21 -- # val=copy_crc32c 00:18:08.355 21:31:29 -- accel/accel.sh@22 -- # case "$var" in 00:18:08.355 21:31:29 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:18:08.355 21:31:29 -- accel/accel.sh@20 -- # IFS=: 00:18:08.355 21:31:29 -- accel/accel.sh@20 -- # read -r var val 00:18:08.355 21:31:29 -- accel/accel.sh@21 -- # val=0 00:18:08.355 21:31:29 -- accel/accel.sh@22 -- # case "$var" in 00:18:08.355 21:31:29 -- accel/accel.sh@20 -- # IFS=: 00:18:08.355 21:31:29 -- accel/accel.sh@20 -- # read -r var val 00:18:08.355 21:31:29 -- accel/accel.sh@21 -- # val='4096 bytes' 00:18:08.355 21:31:29 -- accel/accel.sh@22 -- # case "$var" in 00:18:08.355 21:31:29 -- accel/accel.sh@20 -- # IFS=: 00:18:08.355 21:31:29 -- accel/accel.sh@20 -- # read -r var val 00:18:08.355 21:31:29 -- accel/accel.sh@21 -- # val='8192 bytes' 00:18:08.355 21:31:29 -- accel/accel.sh@22 -- # case "$var" in 00:18:08.355 21:31:29 -- accel/accel.sh@20 -- # IFS=: 00:18:08.355 21:31:29 -- accel/accel.sh@20 -- # read -r var val 00:18:08.355 21:31:29 -- accel/accel.sh@21 -- # val= 00:18:08.355 21:31:29 -- accel/accel.sh@22 -- # case "$var" in 00:18:08.355 21:31:29 -- accel/accel.sh@20 -- # IFS=: 00:18:08.355 21:31:29 -- accel/accel.sh@20 -- # read -r var val 00:18:08.355 21:31:29 -- accel/accel.sh@21 -- # val=software 00:18:08.355 21:31:29 -- accel/accel.sh@22 -- # case "$var" in 00:18:08.355 21:31:29 -- accel/accel.sh@23 -- # accel_module=software 00:18:08.355 21:31:29 -- accel/accel.sh@20 -- # IFS=: 00:18:08.355 21:31:29 -- accel/accel.sh@20 -- # read -r var val 00:18:08.355 21:31:29 -- accel/accel.sh@21 -- # val=32 00:18:08.355 21:31:29 -- accel/accel.sh@22 -- # case "$var" in 00:18:08.355 21:31:29 -- accel/accel.sh@20 -- # IFS=: 00:18:08.355 21:31:29 -- accel/accel.sh@20 -- # read -r var val 00:18:08.355 21:31:29 -- accel/accel.sh@21 -- # val=32 
00:18:08.355 21:31:29 -- accel/accel.sh@22 -- # case "$var" in 00:18:08.355 21:31:29 -- accel/accel.sh@20 -- # IFS=: 00:18:08.355 21:31:29 -- accel/accel.sh@20 -- # read -r var val 00:18:08.355 21:31:29 -- accel/accel.sh@21 -- # val=1 00:18:08.355 21:31:29 -- accel/accel.sh@22 -- # case "$var" in 00:18:08.355 21:31:29 -- accel/accel.sh@20 -- # IFS=: 00:18:08.355 21:31:29 -- accel/accel.sh@20 -- # read -r var val 00:18:08.355 21:31:29 -- accel/accel.sh@21 -- # val='1 seconds' 00:18:08.355 21:31:29 -- accel/accel.sh@22 -- # case "$var" in 00:18:08.355 21:31:29 -- accel/accel.sh@20 -- # IFS=: 00:18:08.355 21:31:29 -- accel/accel.sh@20 -- # read -r var val 00:18:08.355 21:31:29 -- accel/accel.sh@21 -- # val=Yes 00:18:08.355 21:31:29 -- accel/accel.sh@22 -- # case "$var" in 00:18:08.355 21:31:29 -- accel/accel.sh@20 -- # IFS=: 00:18:08.355 21:31:29 -- accel/accel.sh@20 -- # read -r var val 00:18:08.355 21:31:29 -- accel/accel.sh@21 -- # val= 00:18:08.355 21:31:29 -- accel/accel.sh@22 -- # case "$var" in 00:18:08.355 21:31:29 -- accel/accel.sh@20 -- # IFS=: 00:18:08.355 21:31:29 -- accel/accel.sh@20 -- # read -r var val 00:18:08.355 21:31:29 -- accel/accel.sh@21 -- # val= 00:18:08.355 21:31:29 -- accel/accel.sh@22 -- # case "$var" in 00:18:08.355 21:31:29 -- accel/accel.sh@20 -- # IFS=: 00:18:08.355 21:31:29 -- accel/accel.sh@20 -- # read -r var val 00:18:09.287 21:31:30 -- accel/accel.sh@21 -- # val= 00:18:09.287 21:31:30 -- accel/accel.sh@22 -- # case "$var" in 00:18:09.287 21:31:30 -- accel/accel.sh@20 -- # IFS=: 00:18:09.287 21:31:30 -- accel/accel.sh@20 -- # read -r var val 00:18:09.287 21:31:30 -- accel/accel.sh@21 -- # val= 00:18:09.287 21:31:30 -- accel/accel.sh@22 -- # case "$var" in 00:18:09.287 21:31:30 -- accel/accel.sh@20 -- # IFS=: 00:18:09.287 21:31:30 -- accel/accel.sh@20 -- # read -r var val 00:18:09.287 21:31:30 -- accel/accel.sh@21 -- # val= 00:18:09.287 21:31:30 -- accel/accel.sh@22 -- # case "$var" in 00:18:09.287 21:31:30 -- accel/accel.sh@20 -- # IFS=: 00:18:09.287 21:31:30 -- accel/accel.sh@20 -- # read -r var val 00:18:09.287 21:31:30 -- accel/accel.sh@21 -- # val= 00:18:09.287 21:31:30 -- accel/accel.sh@22 -- # case "$var" in 00:18:09.287 21:31:30 -- accel/accel.sh@20 -- # IFS=: 00:18:09.287 21:31:30 -- accel/accel.sh@20 -- # read -r var val 00:18:09.287 21:31:30 -- accel/accel.sh@21 -- # val= 00:18:09.287 21:31:30 -- accel/accel.sh@22 -- # case "$var" in 00:18:09.287 21:31:30 -- accel/accel.sh@20 -- # IFS=: 00:18:09.287 21:31:30 -- accel/accel.sh@20 -- # read -r var val 00:18:09.287 21:31:30 -- accel/accel.sh@21 -- # val= 00:18:09.287 21:31:30 -- accel/accel.sh@22 -- # case "$var" in 00:18:09.287 21:31:30 -- accel/accel.sh@20 -- # IFS=: 00:18:09.287 21:31:30 -- accel/accel.sh@20 -- # read -r var val 00:18:09.287 21:31:30 -- accel/accel.sh@28 -- # [[ -n software ]] 00:18:09.287 21:31:30 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:18:09.544 ************************************ 00:18:09.544 END TEST accel_copy_crc32c_C2 00:18:09.544 ************************************ 00:18:09.544 21:31:30 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:09.544 00:18:09.544 real 0m2.935s 00:18:09.544 user 0m2.507s 00:18:09.544 sys 0m0.220s 00:18:09.544 21:31:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:09.544 21:31:30 -- common/autotest_common.sh@10 -- # set +x 00:18:09.544 21:31:30 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:18:09.544 21:31:30 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 
00:18:09.544 21:31:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:09.544 21:31:30 -- common/autotest_common.sh@10 -- # set +x 00:18:09.544 ************************************ 00:18:09.544 START TEST accel_dualcast 00:18:09.544 ************************************ 00:18:09.544 21:31:30 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dualcast -y 00:18:09.544 21:31:30 -- accel/accel.sh@16 -- # local accel_opc 00:18:09.544 21:31:30 -- accel/accel.sh@17 -- # local accel_module 00:18:09.544 21:31:30 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:18:09.544 21:31:30 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:18:09.544 21:31:30 -- accel/accel.sh@12 -- # build_accel_config 00:18:09.544 21:31:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:18:09.544 21:31:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:18:09.544 21:31:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:18:09.544 21:31:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:18:09.544 21:31:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:18:09.544 21:31:30 -- accel/accel.sh@41 -- # local IFS=, 00:18:09.544 21:31:30 -- accel/accel.sh@42 -- # jq -r . 00:18:09.544 [2024-07-11 21:31:30.302318] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:18:09.544 [2024-07-11 21:31:30.302457] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68571 ] 00:18:09.544 [2024-07-11 21:31:30.440117] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:09.800 [2024-07-11 21:31:30.535419] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:11.176 21:31:31 -- accel/accel.sh@18 -- # out=' 00:18:11.176 SPDK Configuration: 00:18:11.176 Core mask: 0x1 00:18:11.176 00:18:11.176 Accel Perf Configuration: 00:18:11.176 Workload Type: dualcast 00:18:11.176 Transfer size: 4096 bytes 00:18:11.176 Vector count 1 00:18:11.176 Module: software 00:18:11.176 Queue depth: 32 00:18:11.176 Allocate depth: 32 00:18:11.176 # threads/core: 1 00:18:11.176 Run time: 1 seconds 00:18:11.176 Verify: Yes 00:18:11.176 00:18:11.176 Running for 1 seconds... 00:18:11.176 00:18:11.176 Core,Thread Transfers Bandwidth Failed Miscompares 00:18:11.176 ------------------------------------------------------------------------------------ 00:18:11.176 0,0 339552/s 1326 MiB/s 0 0 00:18:11.176 ==================================================================================== 00:18:11.176 Total 339552/s 1326 MiB/s 0 0' 00:18:11.176 21:31:31 -- accel/accel.sh@20 -- # IFS=: 00:18:11.176 21:31:31 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:18:11.176 21:31:31 -- accel/accel.sh@20 -- # read -r var val 00:18:11.176 21:31:31 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:18:11.176 21:31:31 -- accel/accel.sh@12 -- # build_accel_config 00:18:11.176 21:31:31 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:18:11.176 21:31:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:18:11.176 21:31:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:18:11.176 21:31:31 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:18:11.176 21:31:31 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:18:11.176 21:31:31 -- accel/accel.sh@41 -- # local IFS=, 00:18:11.176 21:31:31 -- accel/accel.sh@42 -- # jq -r . 
00:18:11.176 [2024-07-11 21:31:31.780377] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:18:11.176 [2024-07-11 21:31:31.780501] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68596 ] 00:18:11.176 [2024-07-11 21:31:31.914689] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:11.176 [2024-07-11 21:31:32.022278] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:11.176 21:31:32 -- accel/accel.sh@21 -- # val= 00:18:11.176 21:31:32 -- accel/accel.sh@22 -- # case "$var" in 00:18:11.176 21:31:32 -- accel/accel.sh@20 -- # IFS=: 00:18:11.176 21:31:32 -- accel/accel.sh@20 -- # read -r var val 00:18:11.176 21:31:32 -- accel/accel.sh@21 -- # val= 00:18:11.176 21:31:32 -- accel/accel.sh@22 -- # case "$var" in 00:18:11.176 21:31:32 -- accel/accel.sh@20 -- # IFS=: 00:18:11.176 21:31:32 -- accel/accel.sh@20 -- # read -r var val 00:18:11.176 21:31:32 -- accel/accel.sh@21 -- # val=0x1 00:18:11.176 21:31:32 -- accel/accel.sh@22 -- # case "$var" in 00:18:11.176 21:31:32 -- accel/accel.sh@20 -- # IFS=: 00:18:11.176 21:31:32 -- accel/accel.sh@20 -- # read -r var val 00:18:11.176 21:31:32 -- accel/accel.sh@21 -- # val= 00:18:11.176 21:31:32 -- accel/accel.sh@22 -- # case "$var" in 00:18:11.176 21:31:32 -- accel/accel.sh@20 -- # IFS=: 00:18:11.176 21:31:32 -- accel/accel.sh@20 -- # read -r var val 00:18:11.176 21:31:32 -- accel/accel.sh@21 -- # val= 00:18:11.176 21:31:32 -- accel/accel.sh@22 -- # case "$var" in 00:18:11.176 21:31:32 -- accel/accel.sh@20 -- # IFS=: 00:18:11.176 21:31:32 -- accel/accel.sh@20 -- # read -r var val 00:18:11.176 21:31:32 -- accel/accel.sh@21 -- # val=dualcast 00:18:11.176 21:31:32 -- accel/accel.sh@22 -- # case "$var" in 00:18:11.176 21:31:32 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:18:11.176 21:31:32 -- accel/accel.sh@20 -- # IFS=: 00:18:11.176 21:31:32 -- accel/accel.sh@20 -- # read -r var val 00:18:11.176 21:31:32 -- accel/accel.sh@21 -- # val='4096 bytes' 00:18:11.176 21:31:32 -- accel/accel.sh@22 -- # case "$var" in 00:18:11.176 21:31:32 -- accel/accel.sh@20 -- # IFS=: 00:18:11.176 21:31:32 -- accel/accel.sh@20 -- # read -r var val 00:18:11.176 21:31:32 -- accel/accel.sh@21 -- # val= 00:18:11.176 21:31:32 -- accel/accel.sh@22 -- # case "$var" in 00:18:11.176 21:31:32 -- accel/accel.sh@20 -- # IFS=: 00:18:11.176 21:31:32 -- accel/accel.sh@20 -- # read -r var val 00:18:11.176 21:31:32 -- accel/accel.sh@21 -- # val=software 00:18:11.176 21:31:32 -- accel/accel.sh@22 -- # case "$var" in 00:18:11.176 21:31:32 -- accel/accel.sh@23 -- # accel_module=software 00:18:11.176 21:31:32 -- accel/accel.sh@20 -- # IFS=: 00:18:11.176 21:31:32 -- accel/accel.sh@20 -- # read -r var val 00:18:11.176 21:31:32 -- accel/accel.sh@21 -- # val=32 00:18:11.176 21:31:32 -- accel/accel.sh@22 -- # case "$var" in 00:18:11.176 21:31:32 -- accel/accel.sh@20 -- # IFS=: 00:18:11.176 21:31:32 -- accel/accel.sh@20 -- # read -r var val 00:18:11.176 21:31:32 -- accel/accel.sh@21 -- # val=32 00:18:11.176 21:31:32 -- accel/accel.sh@22 -- # case "$var" in 00:18:11.176 21:31:32 -- accel/accel.sh@20 -- # IFS=: 00:18:11.176 21:31:32 -- accel/accel.sh@20 -- # read -r var val 00:18:11.176 21:31:32 -- accel/accel.sh@21 -- # val=1 00:18:11.176 21:31:32 -- accel/accel.sh@22 -- # case "$var" in 00:18:11.176 21:31:32 -- accel/accel.sh@20 -- # IFS=: 00:18:11.176 
21:31:32 -- accel/accel.sh@20 -- # read -r var val 00:18:11.176 21:31:32 -- accel/accel.sh@21 -- # val='1 seconds' 00:18:11.176 21:31:32 -- accel/accel.sh@22 -- # case "$var" in 00:18:11.176 21:31:32 -- accel/accel.sh@20 -- # IFS=: 00:18:11.176 21:31:32 -- accel/accel.sh@20 -- # read -r var val 00:18:11.176 21:31:32 -- accel/accel.sh@21 -- # val=Yes 00:18:11.176 21:31:32 -- accel/accel.sh@22 -- # case "$var" in 00:18:11.176 21:31:32 -- accel/accel.sh@20 -- # IFS=: 00:18:11.176 21:31:32 -- accel/accel.sh@20 -- # read -r var val 00:18:11.176 21:31:32 -- accel/accel.sh@21 -- # val= 00:18:11.176 21:31:32 -- accel/accel.sh@22 -- # case "$var" in 00:18:11.176 21:31:32 -- accel/accel.sh@20 -- # IFS=: 00:18:11.176 21:31:32 -- accel/accel.sh@20 -- # read -r var val 00:18:11.176 21:31:32 -- accel/accel.sh@21 -- # val= 00:18:11.176 21:31:32 -- accel/accel.sh@22 -- # case "$var" in 00:18:11.176 21:31:32 -- accel/accel.sh@20 -- # IFS=: 00:18:11.176 21:31:32 -- accel/accel.sh@20 -- # read -r var val 00:18:12.551 21:31:33 -- accel/accel.sh@21 -- # val= 00:18:12.551 21:31:33 -- accel/accel.sh@22 -- # case "$var" in 00:18:12.551 21:31:33 -- accel/accel.sh@20 -- # IFS=: 00:18:12.551 21:31:33 -- accel/accel.sh@20 -- # read -r var val 00:18:12.551 21:31:33 -- accel/accel.sh@21 -- # val= 00:18:12.551 21:31:33 -- accel/accel.sh@22 -- # case "$var" in 00:18:12.551 21:31:33 -- accel/accel.sh@20 -- # IFS=: 00:18:12.551 21:31:33 -- accel/accel.sh@20 -- # read -r var val 00:18:12.551 21:31:33 -- accel/accel.sh@21 -- # val= 00:18:12.551 21:31:33 -- accel/accel.sh@22 -- # case "$var" in 00:18:12.551 21:31:33 -- accel/accel.sh@20 -- # IFS=: 00:18:12.551 21:31:33 -- accel/accel.sh@20 -- # read -r var val 00:18:12.551 21:31:33 -- accel/accel.sh@21 -- # val= 00:18:12.551 21:31:33 -- accel/accel.sh@22 -- # case "$var" in 00:18:12.551 21:31:33 -- accel/accel.sh@20 -- # IFS=: 00:18:12.551 21:31:33 -- accel/accel.sh@20 -- # read -r var val 00:18:12.551 21:31:33 -- accel/accel.sh@21 -- # val= 00:18:12.551 21:31:33 -- accel/accel.sh@22 -- # case "$var" in 00:18:12.551 21:31:33 -- accel/accel.sh@20 -- # IFS=: 00:18:12.551 21:31:33 -- accel/accel.sh@20 -- # read -r var val 00:18:12.551 21:31:33 -- accel/accel.sh@21 -- # val= 00:18:12.551 21:31:33 -- accel/accel.sh@22 -- # case "$var" in 00:18:12.551 21:31:33 -- accel/accel.sh@20 -- # IFS=: 00:18:12.551 21:31:33 -- accel/accel.sh@20 -- # read -r var val 00:18:12.551 21:31:33 -- accel/accel.sh@28 -- # [[ -n software ]] 00:18:12.551 21:31:33 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:18:12.551 21:31:33 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:12.551 00:18:12.551 real 0m2.968s 00:18:12.551 user 0m2.513s 00:18:12.551 sys 0m0.246s 00:18:12.551 21:31:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:12.551 ************************************ 00:18:12.551 END TEST accel_dualcast 00:18:12.551 ************************************ 00:18:12.551 21:31:33 -- common/autotest_common.sh@10 -- # set +x 00:18:12.551 21:31:33 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:18:12.551 21:31:33 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:18:12.551 21:31:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:12.551 21:31:33 -- common/autotest_common.sh@10 -- # set +x 00:18:12.551 ************************************ 00:18:12.551 START TEST accel_compare 00:18:12.551 ************************************ 00:18:12.551 21:31:33 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compare -y 00:18:12.551 
21:31:33 -- accel/accel.sh@16 -- # local accel_opc 00:18:12.551 21:31:33 -- accel/accel.sh@17 -- # local accel_module 00:18:12.551 21:31:33 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:18:12.551 21:31:33 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:18:12.551 21:31:33 -- accel/accel.sh@12 -- # build_accel_config 00:18:12.551 21:31:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:18:12.551 21:31:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:18:12.551 21:31:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:18:12.551 21:31:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:18:12.551 21:31:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:18:12.551 21:31:33 -- accel/accel.sh@41 -- # local IFS=, 00:18:12.551 21:31:33 -- accel/accel.sh@42 -- # jq -r . 00:18:12.551 [2024-07-11 21:31:33.320327] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:18:12.551 [2024-07-11 21:31:33.320516] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68625 ] 00:18:12.551 [2024-07-11 21:31:33.465390] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:12.808 [2024-07-11 21:31:33.567970] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:14.181 21:31:34 -- accel/accel.sh@18 -- # out=' 00:18:14.181 SPDK Configuration: 00:18:14.181 Core mask: 0x1 00:18:14.181 00:18:14.181 Accel Perf Configuration: 00:18:14.181 Workload Type: compare 00:18:14.181 Transfer size: 4096 bytes 00:18:14.181 Vector count 1 00:18:14.181 Module: software 00:18:14.181 Queue depth: 32 00:18:14.181 Allocate depth: 32 00:18:14.181 # threads/core: 1 00:18:14.181 Run time: 1 seconds 00:18:14.181 Verify: Yes 00:18:14.181 00:18:14.181 Running for 1 seconds... 00:18:14.181 00:18:14.181 Core,Thread Transfers Bandwidth Failed Miscompares 00:18:14.181 ------------------------------------------------------------------------------------ 00:18:14.181 0,0 441088/s 1723 MiB/s 0 0 00:18:14.181 ==================================================================================== 00:18:14.181 Total 441088/s 1723 MiB/s 0 0' 00:18:14.181 21:31:34 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:18:14.181 21:31:34 -- accel/accel.sh@20 -- # IFS=: 00:18:14.181 21:31:34 -- accel/accel.sh@20 -- # read -r var val 00:18:14.181 21:31:34 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:18:14.181 21:31:34 -- accel/accel.sh@12 -- # build_accel_config 00:18:14.181 21:31:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:18:14.181 21:31:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:18:14.181 21:31:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:18:14.181 21:31:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:18:14.181 21:31:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:18:14.181 21:31:34 -- accel/accel.sh@41 -- # local IFS=, 00:18:14.181 21:31:34 -- accel/accel.sh@42 -- # jq -r . 00:18:14.181 [2024-07-11 21:31:34.807820] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:18:14.181 [2024-07-11 21:31:34.807952] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68650 ] 00:18:14.181 [2024-07-11 21:31:34.954431] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:14.181 [2024-07-11 21:31:35.051134] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:14.181 21:31:35 -- accel/accel.sh@21 -- # val= 00:18:14.181 21:31:35 -- accel/accel.sh@22 -- # case "$var" in 00:18:14.181 21:31:35 -- accel/accel.sh@20 -- # IFS=: 00:18:14.181 21:31:35 -- accel/accel.sh@20 -- # read -r var val 00:18:14.181 21:31:35 -- accel/accel.sh@21 -- # val= 00:18:14.181 21:31:35 -- accel/accel.sh@22 -- # case "$var" in 00:18:14.181 21:31:35 -- accel/accel.sh@20 -- # IFS=: 00:18:14.181 21:31:35 -- accel/accel.sh@20 -- # read -r var val 00:18:14.181 21:31:35 -- accel/accel.sh@21 -- # val=0x1 00:18:14.181 21:31:35 -- accel/accel.sh@22 -- # case "$var" in 00:18:14.181 21:31:35 -- accel/accel.sh@20 -- # IFS=: 00:18:14.181 21:31:35 -- accel/accel.sh@20 -- # read -r var val 00:18:14.181 21:31:35 -- accel/accel.sh@21 -- # val= 00:18:14.181 21:31:35 -- accel/accel.sh@22 -- # case "$var" in 00:18:14.181 21:31:35 -- accel/accel.sh@20 -- # IFS=: 00:18:14.181 21:31:35 -- accel/accel.sh@20 -- # read -r var val 00:18:14.181 21:31:35 -- accel/accel.sh@21 -- # val= 00:18:14.181 21:31:35 -- accel/accel.sh@22 -- # case "$var" in 00:18:14.181 21:31:35 -- accel/accel.sh@20 -- # IFS=: 00:18:14.181 21:31:35 -- accel/accel.sh@20 -- # read -r var val 00:18:14.181 21:31:35 -- accel/accel.sh@21 -- # val=compare 00:18:14.181 21:31:35 -- accel/accel.sh@22 -- # case "$var" in 00:18:14.181 21:31:35 -- accel/accel.sh@24 -- # accel_opc=compare 00:18:14.181 21:31:35 -- accel/accel.sh@20 -- # IFS=: 00:18:14.181 21:31:35 -- accel/accel.sh@20 -- # read -r var val 00:18:14.181 21:31:35 -- accel/accel.sh@21 -- # val='4096 bytes' 00:18:14.181 21:31:35 -- accel/accel.sh@22 -- # case "$var" in 00:18:14.181 21:31:35 -- accel/accel.sh@20 -- # IFS=: 00:18:14.181 21:31:35 -- accel/accel.sh@20 -- # read -r var val 00:18:14.181 21:31:35 -- accel/accel.sh@21 -- # val= 00:18:14.181 21:31:35 -- accel/accel.sh@22 -- # case "$var" in 00:18:14.181 21:31:35 -- accel/accel.sh@20 -- # IFS=: 00:18:14.181 21:31:35 -- accel/accel.sh@20 -- # read -r var val 00:18:14.181 21:31:35 -- accel/accel.sh@21 -- # val=software 00:18:14.181 21:31:35 -- accel/accel.sh@22 -- # case "$var" in 00:18:14.181 21:31:35 -- accel/accel.sh@23 -- # accel_module=software 00:18:14.181 21:31:35 -- accel/accel.sh@20 -- # IFS=: 00:18:14.181 21:31:35 -- accel/accel.sh@20 -- # read -r var val 00:18:14.182 21:31:35 -- accel/accel.sh@21 -- # val=32 00:18:14.182 21:31:35 -- accel/accel.sh@22 -- # case "$var" in 00:18:14.182 21:31:35 -- accel/accel.sh@20 -- # IFS=: 00:18:14.182 21:31:35 -- accel/accel.sh@20 -- # read -r var val 00:18:14.182 21:31:35 -- accel/accel.sh@21 -- # val=32 00:18:14.182 21:31:35 -- accel/accel.sh@22 -- # case "$var" in 00:18:14.182 21:31:35 -- accel/accel.sh@20 -- # IFS=: 00:18:14.182 21:31:35 -- accel/accel.sh@20 -- # read -r var val 00:18:14.182 21:31:35 -- accel/accel.sh@21 -- # val=1 00:18:14.182 21:31:35 -- accel/accel.sh@22 -- # case "$var" in 00:18:14.182 21:31:35 -- accel/accel.sh@20 -- # IFS=: 00:18:14.182 21:31:35 -- accel/accel.sh@20 -- # read -r var val 00:18:14.182 21:31:35 -- accel/accel.sh@21 -- # val='1 seconds' 
00:18:14.182 21:31:35 -- accel/accel.sh@22 -- # case "$var" in 00:18:14.182 21:31:35 -- accel/accel.sh@20 -- # IFS=: 00:18:14.182 21:31:35 -- accel/accel.sh@20 -- # read -r var val 00:18:14.182 21:31:35 -- accel/accel.sh@21 -- # val=Yes 00:18:14.182 21:31:35 -- accel/accel.sh@22 -- # case "$var" in 00:18:14.182 21:31:35 -- accel/accel.sh@20 -- # IFS=: 00:18:14.182 21:31:35 -- accel/accel.sh@20 -- # read -r var val 00:18:14.182 21:31:35 -- accel/accel.sh@21 -- # val= 00:18:14.182 21:31:35 -- accel/accel.sh@22 -- # case "$var" in 00:18:14.182 21:31:35 -- accel/accel.sh@20 -- # IFS=: 00:18:14.182 21:31:35 -- accel/accel.sh@20 -- # read -r var val 00:18:14.182 21:31:35 -- accel/accel.sh@21 -- # val= 00:18:14.182 21:31:35 -- accel/accel.sh@22 -- # case "$var" in 00:18:14.182 21:31:35 -- accel/accel.sh@20 -- # IFS=: 00:18:14.182 21:31:35 -- accel/accel.sh@20 -- # read -r var val 00:18:15.554 21:31:36 -- accel/accel.sh@21 -- # val= 00:18:15.554 21:31:36 -- accel/accel.sh@22 -- # case "$var" in 00:18:15.554 21:31:36 -- accel/accel.sh@20 -- # IFS=: 00:18:15.554 21:31:36 -- accel/accel.sh@20 -- # read -r var val 00:18:15.554 21:31:36 -- accel/accel.sh@21 -- # val= 00:18:15.554 21:31:36 -- accel/accel.sh@22 -- # case "$var" in 00:18:15.554 21:31:36 -- accel/accel.sh@20 -- # IFS=: 00:18:15.554 21:31:36 -- accel/accel.sh@20 -- # read -r var val 00:18:15.554 21:31:36 -- accel/accel.sh@21 -- # val= 00:18:15.554 21:31:36 -- accel/accel.sh@22 -- # case "$var" in 00:18:15.554 21:31:36 -- accel/accel.sh@20 -- # IFS=: 00:18:15.554 21:31:36 -- accel/accel.sh@20 -- # read -r var val 00:18:15.554 21:31:36 -- accel/accel.sh@21 -- # val= 00:18:15.554 21:31:36 -- accel/accel.sh@22 -- # case "$var" in 00:18:15.554 21:31:36 -- accel/accel.sh@20 -- # IFS=: 00:18:15.554 21:31:36 -- accel/accel.sh@20 -- # read -r var val 00:18:15.554 21:31:36 -- accel/accel.sh@21 -- # val= 00:18:15.554 21:31:36 -- accel/accel.sh@22 -- # case "$var" in 00:18:15.554 21:31:36 -- accel/accel.sh@20 -- # IFS=: 00:18:15.554 21:31:36 -- accel/accel.sh@20 -- # read -r var val 00:18:15.554 21:31:36 -- accel/accel.sh@21 -- # val= 00:18:15.554 21:31:36 -- accel/accel.sh@22 -- # case "$var" in 00:18:15.554 21:31:36 -- accel/accel.sh@20 -- # IFS=: 00:18:15.554 21:31:36 -- accel/accel.sh@20 -- # read -r var val 00:18:15.554 21:31:36 -- accel/accel.sh@28 -- # [[ -n software ]] 00:18:15.554 21:31:36 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:18:15.554 21:31:36 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:15.554 00:18:15.554 real 0m2.974s 00:18:15.554 user 0m2.515s 00:18:15.554 sys 0m0.251s 00:18:15.554 21:31:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:15.554 21:31:36 -- common/autotest_common.sh@10 -- # set +x 00:18:15.554 ************************************ 00:18:15.554 END TEST accel_compare 00:18:15.554 ************************************ 00:18:15.554 21:31:36 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:18:15.554 21:31:36 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:18:15.554 21:31:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:15.554 21:31:36 -- common/autotest_common.sh@10 -- # set +x 00:18:15.554 ************************************ 00:18:15.554 START TEST accel_xor 00:18:15.554 ************************************ 00:18:15.554 21:31:36 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y 00:18:15.554 21:31:36 -- accel/accel.sh@16 -- # local accel_opc 00:18:15.554 21:31:36 -- accel/accel.sh@17 -- # local accel_module 00:18:15.554 
21:31:36 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:18:15.554 21:31:36 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:18:15.554 21:31:36 -- accel/accel.sh@12 -- # build_accel_config 00:18:15.554 21:31:36 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:18:15.554 21:31:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:18:15.554 21:31:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:18:15.554 21:31:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:18:15.554 21:31:36 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:18:15.554 21:31:36 -- accel/accel.sh@41 -- # local IFS=, 00:18:15.554 21:31:36 -- accel/accel.sh@42 -- # jq -r . 00:18:15.554 [2024-07-11 21:31:36.332968] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:18:15.554 [2024-07-11 21:31:36.333071] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68679 ] 00:18:15.554 [2024-07-11 21:31:36.467456] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:15.813 [2024-07-11 21:31:36.566932] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:17.186 21:31:37 -- accel/accel.sh@18 -- # out=' 00:18:17.186 SPDK Configuration: 00:18:17.186 Core mask: 0x1 00:18:17.186 00:18:17.186 Accel Perf Configuration: 00:18:17.186 Workload Type: xor 00:18:17.186 Source buffers: 2 00:18:17.186 Transfer size: 4096 bytes 00:18:17.186 Vector count 1 00:18:17.186 Module: software 00:18:17.186 Queue depth: 32 00:18:17.186 Allocate depth: 32 00:18:17.186 # threads/core: 1 00:18:17.186 Run time: 1 seconds 00:18:17.186 Verify: Yes 00:18:17.186 00:18:17.186 Running for 1 seconds... 00:18:17.186 00:18:17.186 Core,Thread Transfers Bandwidth Failed Miscompares 00:18:17.186 ------------------------------------------------------------------------------------ 00:18:17.186 0,0 239360/s 935 MiB/s 0 0 00:18:17.186 ==================================================================================== 00:18:17.186 Total 239360/s 935 MiB/s 0 0' 00:18:17.186 21:31:37 -- accel/accel.sh@20 -- # IFS=: 00:18:17.186 21:31:37 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:18:17.186 21:31:37 -- accel/accel.sh@20 -- # read -r var val 00:18:17.186 21:31:37 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:18:17.186 21:31:37 -- accel/accel.sh@12 -- # build_accel_config 00:18:17.186 21:31:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:18:17.186 21:31:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:18:17.186 21:31:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:18:17.186 21:31:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:18:17.186 21:31:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:18:17.186 21:31:37 -- accel/accel.sh@41 -- # local IFS=, 00:18:17.186 21:31:37 -- accel/accel.sh@42 -- # jq -r . 00:18:17.186 [2024-07-11 21:31:37.801352] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:18:17.186 [2024-07-11 21:31:37.801465] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68704 ] 00:18:17.186 [2024-07-11 21:31:37.941116] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:17.186 [2024-07-11 21:31:38.037139] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:17.186 21:31:38 -- accel/accel.sh@21 -- # val= 00:18:17.186 21:31:38 -- accel/accel.sh@22 -- # case "$var" in 00:18:17.186 21:31:38 -- accel/accel.sh@20 -- # IFS=: 00:18:17.186 21:31:38 -- accel/accel.sh@20 -- # read -r var val 00:18:17.186 21:31:38 -- accel/accel.sh@21 -- # val= 00:18:17.186 21:31:38 -- accel/accel.sh@22 -- # case "$var" in 00:18:17.186 21:31:38 -- accel/accel.sh@20 -- # IFS=: 00:18:17.186 21:31:38 -- accel/accel.sh@20 -- # read -r var val 00:18:17.186 21:31:38 -- accel/accel.sh@21 -- # val=0x1 00:18:17.186 21:31:38 -- accel/accel.sh@22 -- # case "$var" in 00:18:17.186 21:31:38 -- accel/accel.sh@20 -- # IFS=: 00:18:17.186 21:31:38 -- accel/accel.sh@20 -- # read -r var val 00:18:17.186 21:31:38 -- accel/accel.sh@21 -- # val= 00:18:17.186 21:31:38 -- accel/accel.sh@22 -- # case "$var" in 00:18:17.186 21:31:38 -- accel/accel.sh@20 -- # IFS=: 00:18:17.186 21:31:38 -- accel/accel.sh@20 -- # read -r var val 00:18:17.186 21:31:38 -- accel/accel.sh@21 -- # val= 00:18:17.186 21:31:38 -- accel/accel.sh@22 -- # case "$var" in 00:18:17.186 21:31:38 -- accel/accel.sh@20 -- # IFS=: 00:18:17.186 21:31:38 -- accel/accel.sh@20 -- # read -r var val 00:18:17.186 21:31:38 -- accel/accel.sh@21 -- # val=xor 00:18:17.186 21:31:38 -- accel/accel.sh@22 -- # case "$var" in 00:18:17.186 21:31:38 -- accel/accel.sh@24 -- # accel_opc=xor 00:18:17.186 21:31:38 -- accel/accel.sh@20 -- # IFS=: 00:18:17.186 21:31:38 -- accel/accel.sh@20 -- # read -r var val 00:18:17.186 21:31:38 -- accel/accel.sh@21 -- # val=2 00:18:17.186 21:31:38 -- accel/accel.sh@22 -- # case "$var" in 00:18:17.186 21:31:38 -- accel/accel.sh@20 -- # IFS=: 00:18:17.186 21:31:38 -- accel/accel.sh@20 -- # read -r var val 00:18:17.186 21:31:38 -- accel/accel.sh@21 -- # val='4096 bytes' 00:18:17.186 21:31:38 -- accel/accel.sh@22 -- # case "$var" in 00:18:17.186 21:31:38 -- accel/accel.sh@20 -- # IFS=: 00:18:17.186 21:31:38 -- accel/accel.sh@20 -- # read -r var val 00:18:17.186 21:31:38 -- accel/accel.sh@21 -- # val= 00:18:17.186 21:31:38 -- accel/accel.sh@22 -- # case "$var" in 00:18:17.186 21:31:38 -- accel/accel.sh@20 -- # IFS=: 00:18:17.186 21:31:38 -- accel/accel.sh@20 -- # read -r var val 00:18:17.186 21:31:38 -- accel/accel.sh@21 -- # val=software 00:18:17.186 21:31:38 -- accel/accel.sh@22 -- # case "$var" in 00:18:17.186 21:31:38 -- accel/accel.sh@23 -- # accel_module=software 00:18:17.186 21:31:38 -- accel/accel.sh@20 -- # IFS=: 00:18:17.186 21:31:38 -- accel/accel.sh@20 -- # read -r var val 00:18:17.186 21:31:38 -- accel/accel.sh@21 -- # val=32 00:18:17.186 21:31:38 -- accel/accel.sh@22 -- # case "$var" in 00:18:17.187 21:31:38 -- accel/accel.sh@20 -- # IFS=: 00:18:17.187 21:31:38 -- accel/accel.sh@20 -- # read -r var val 00:18:17.187 21:31:38 -- accel/accel.sh@21 -- # val=32 00:18:17.187 21:31:38 -- accel/accel.sh@22 -- # case "$var" in 00:18:17.187 21:31:38 -- accel/accel.sh@20 -- # IFS=: 00:18:17.187 21:31:38 -- accel/accel.sh@20 -- # read -r var val 00:18:17.187 21:31:38 -- accel/accel.sh@21 -- # val=1 00:18:17.187 21:31:38 -- 
accel/accel.sh@22 -- # case "$var" in 00:18:17.187 21:31:38 -- accel/accel.sh@20 -- # IFS=: 00:18:17.187 21:31:38 -- accel/accel.sh@20 -- # read -r var val 00:18:17.187 21:31:38 -- accel/accel.sh@21 -- # val='1 seconds' 00:18:17.187 21:31:38 -- accel/accel.sh@22 -- # case "$var" in 00:18:17.187 21:31:38 -- accel/accel.sh@20 -- # IFS=: 00:18:17.187 21:31:38 -- accel/accel.sh@20 -- # read -r var val 00:18:17.187 21:31:38 -- accel/accel.sh@21 -- # val=Yes 00:18:17.187 21:31:38 -- accel/accel.sh@22 -- # case "$var" in 00:18:17.187 21:31:38 -- accel/accel.sh@20 -- # IFS=: 00:18:17.187 21:31:38 -- accel/accel.sh@20 -- # read -r var val 00:18:17.187 21:31:38 -- accel/accel.sh@21 -- # val= 00:18:17.187 21:31:38 -- accel/accel.sh@22 -- # case "$var" in 00:18:17.187 21:31:38 -- accel/accel.sh@20 -- # IFS=: 00:18:17.187 21:31:38 -- accel/accel.sh@20 -- # read -r var val 00:18:17.187 21:31:38 -- accel/accel.sh@21 -- # val= 00:18:17.187 21:31:38 -- accel/accel.sh@22 -- # case "$var" in 00:18:17.187 21:31:38 -- accel/accel.sh@20 -- # IFS=: 00:18:17.187 21:31:38 -- accel/accel.sh@20 -- # read -r var val 00:18:18.573 21:31:39 -- accel/accel.sh@21 -- # val= 00:18:18.573 21:31:39 -- accel/accel.sh@22 -- # case "$var" in 00:18:18.573 21:31:39 -- accel/accel.sh@20 -- # IFS=: 00:18:18.573 21:31:39 -- accel/accel.sh@20 -- # read -r var val 00:18:18.573 21:31:39 -- accel/accel.sh@21 -- # val= 00:18:18.573 21:31:39 -- accel/accel.sh@22 -- # case "$var" in 00:18:18.573 21:31:39 -- accel/accel.sh@20 -- # IFS=: 00:18:18.573 21:31:39 -- accel/accel.sh@20 -- # read -r var val 00:18:18.573 21:31:39 -- accel/accel.sh@21 -- # val= 00:18:18.573 21:31:39 -- accel/accel.sh@22 -- # case "$var" in 00:18:18.573 21:31:39 -- accel/accel.sh@20 -- # IFS=: 00:18:18.573 21:31:39 -- accel/accel.sh@20 -- # read -r var val 00:18:18.573 21:31:39 -- accel/accel.sh@21 -- # val= 00:18:18.573 21:31:39 -- accel/accel.sh@22 -- # case "$var" in 00:18:18.573 21:31:39 -- accel/accel.sh@20 -- # IFS=: 00:18:18.573 21:31:39 -- accel/accel.sh@20 -- # read -r var val 00:18:18.573 21:31:39 -- accel/accel.sh@21 -- # val= 00:18:18.573 21:31:39 -- accel/accel.sh@22 -- # case "$var" in 00:18:18.573 21:31:39 -- accel/accel.sh@20 -- # IFS=: 00:18:18.573 21:31:39 -- accel/accel.sh@20 -- # read -r var val 00:18:18.573 21:31:39 -- accel/accel.sh@21 -- # val= 00:18:18.573 21:31:39 -- accel/accel.sh@22 -- # case "$var" in 00:18:18.573 21:31:39 -- accel/accel.sh@20 -- # IFS=: 00:18:18.573 21:31:39 -- accel/accel.sh@20 -- # read -r var val 00:18:18.573 21:31:39 -- accel/accel.sh@28 -- # [[ -n software ]] 00:18:18.573 21:31:39 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:18:18.573 ************************************ 00:18:18.573 END TEST accel_xor 00:18:18.573 ************************************ 00:18:18.573 21:31:39 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:18.573 00:18:18.573 real 0m2.943s 00:18:18.573 user 0m2.498s 00:18:18.573 sys 0m0.237s 00:18:18.573 21:31:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:18.573 21:31:39 -- common/autotest_common.sh@10 -- # set +x 00:18:18.573 21:31:39 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:18:18.573 21:31:39 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:18:18.573 21:31:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:18.573 21:31:39 -- common/autotest_common.sh@10 -- # set +x 00:18:18.573 ************************************ 00:18:18.573 START TEST accel_xor 00:18:18.573 ************************************ 00:18:18.573 
21:31:39 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y -x 3 00:18:18.573 21:31:39 -- accel/accel.sh@16 -- # local accel_opc 00:18:18.573 21:31:39 -- accel/accel.sh@17 -- # local accel_module 00:18:18.573 21:31:39 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:18:18.573 21:31:39 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:18:18.573 21:31:39 -- accel/accel.sh@12 -- # build_accel_config 00:18:18.573 21:31:39 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:18:18.573 21:31:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:18:18.573 21:31:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:18:18.573 21:31:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:18:18.573 21:31:39 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:18:18.573 21:31:39 -- accel/accel.sh@41 -- # local IFS=, 00:18:18.573 21:31:39 -- accel/accel.sh@42 -- # jq -r . 00:18:18.573 [2024-07-11 21:31:39.310945] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:18:18.573 [2024-07-11 21:31:39.311040] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68733 ] 00:18:18.573 [2024-07-11 21:31:39.456619] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:18.836 [2024-07-11 21:31:39.552780] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:20.241 21:31:40 -- accel/accel.sh@18 -- # out=' 00:18:20.241 SPDK Configuration: 00:18:20.241 Core mask: 0x1 00:18:20.241 00:18:20.241 Accel Perf Configuration: 00:18:20.241 Workload Type: xor 00:18:20.241 Source buffers: 3 00:18:20.241 Transfer size: 4096 bytes 00:18:20.241 Vector count 1 00:18:20.241 Module: software 00:18:20.241 Queue depth: 32 00:18:20.241 Allocate depth: 32 00:18:20.241 # threads/core: 1 00:18:20.241 Run time: 1 seconds 00:18:20.241 Verify: Yes 00:18:20.241 00:18:20.241 Running for 1 seconds... 00:18:20.241 00:18:20.241 Core,Thread Transfers Bandwidth Failed Miscompares 00:18:20.241 ------------------------------------------------------------------------------------ 00:18:20.241 0,0 233600/s 912 MiB/s 0 0 00:18:20.241 ==================================================================================== 00:18:20.241 Total 233600/s 912 MiB/s 0 0' 00:18:20.241 21:31:40 -- accel/accel.sh@20 -- # IFS=: 00:18:20.241 21:31:40 -- accel/accel.sh@20 -- # read -r var val 00:18:20.241 21:31:40 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:18:20.241 21:31:40 -- accel/accel.sh@12 -- # build_accel_config 00:18:20.241 21:31:40 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:18:20.241 21:31:40 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:18:20.241 21:31:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:18:20.241 21:31:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:18:20.241 21:31:40 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:18:20.241 21:31:40 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:18:20.241 21:31:40 -- accel/accel.sh@41 -- # local IFS=, 00:18:20.241 21:31:40 -- accel/accel.sh@42 -- # jq -r . 00:18:20.241 [2024-07-11 21:31:40.788324] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:18:20.241 [2024-07-11 21:31:40.788434] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68753 ] 00:18:20.241 [2024-07-11 21:31:40.928102] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:20.241 [2024-07-11 21:31:41.025139] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:20.241 21:31:41 -- accel/accel.sh@21 -- # val= 00:18:20.241 21:31:41 -- accel/accel.sh@22 -- # case "$var" in 00:18:20.241 21:31:41 -- accel/accel.sh@20 -- # IFS=: 00:18:20.241 21:31:41 -- accel/accel.sh@20 -- # read -r var val 00:18:20.241 21:31:41 -- accel/accel.sh@21 -- # val= 00:18:20.241 21:31:41 -- accel/accel.sh@22 -- # case "$var" in 00:18:20.241 21:31:41 -- accel/accel.sh@20 -- # IFS=: 00:18:20.241 21:31:41 -- accel/accel.sh@20 -- # read -r var val 00:18:20.241 21:31:41 -- accel/accel.sh@21 -- # val=0x1 00:18:20.241 21:31:41 -- accel/accel.sh@22 -- # case "$var" in 00:18:20.241 21:31:41 -- accel/accel.sh@20 -- # IFS=: 00:18:20.241 21:31:41 -- accel/accel.sh@20 -- # read -r var val 00:18:20.241 21:31:41 -- accel/accel.sh@21 -- # val= 00:18:20.241 21:31:41 -- accel/accel.sh@22 -- # case "$var" in 00:18:20.241 21:31:41 -- accel/accel.sh@20 -- # IFS=: 00:18:20.241 21:31:41 -- accel/accel.sh@20 -- # read -r var val 00:18:20.241 21:31:41 -- accel/accel.sh@21 -- # val= 00:18:20.241 21:31:41 -- accel/accel.sh@22 -- # case "$var" in 00:18:20.241 21:31:41 -- accel/accel.sh@20 -- # IFS=: 00:18:20.241 21:31:41 -- accel/accel.sh@20 -- # read -r var val 00:18:20.241 21:31:41 -- accel/accel.sh@21 -- # val=xor 00:18:20.241 21:31:41 -- accel/accel.sh@22 -- # case "$var" in 00:18:20.241 21:31:41 -- accel/accel.sh@24 -- # accel_opc=xor 00:18:20.241 21:31:41 -- accel/accel.sh@20 -- # IFS=: 00:18:20.241 21:31:41 -- accel/accel.sh@20 -- # read -r var val 00:18:20.241 21:31:41 -- accel/accel.sh@21 -- # val=3 00:18:20.241 21:31:41 -- accel/accel.sh@22 -- # case "$var" in 00:18:20.241 21:31:41 -- accel/accel.sh@20 -- # IFS=: 00:18:20.242 21:31:41 -- accel/accel.sh@20 -- # read -r var val 00:18:20.242 21:31:41 -- accel/accel.sh@21 -- # val='4096 bytes' 00:18:20.242 21:31:41 -- accel/accel.sh@22 -- # case "$var" in 00:18:20.242 21:31:41 -- accel/accel.sh@20 -- # IFS=: 00:18:20.242 21:31:41 -- accel/accel.sh@20 -- # read -r var val 00:18:20.242 21:31:41 -- accel/accel.sh@21 -- # val= 00:18:20.242 21:31:41 -- accel/accel.sh@22 -- # case "$var" in 00:18:20.242 21:31:41 -- accel/accel.sh@20 -- # IFS=: 00:18:20.242 21:31:41 -- accel/accel.sh@20 -- # read -r var val 00:18:20.242 21:31:41 -- accel/accel.sh@21 -- # val=software 00:18:20.242 21:31:41 -- accel/accel.sh@22 -- # case "$var" in 00:18:20.242 21:31:41 -- accel/accel.sh@23 -- # accel_module=software 00:18:20.242 21:31:41 -- accel/accel.sh@20 -- # IFS=: 00:18:20.242 21:31:41 -- accel/accel.sh@20 -- # read -r var val 00:18:20.242 21:31:41 -- accel/accel.sh@21 -- # val=32 00:18:20.242 21:31:41 -- accel/accel.sh@22 -- # case "$var" in 00:18:20.242 21:31:41 -- accel/accel.sh@20 -- # IFS=: 00:18:20.242 21:31:41 -- accel/accel.sh@20 -- # read -r var val 00:18:20.242 21:31:41 -- accel/accel.sh@21 -- # val=32 00:18:20.242 21:31:41 -- accel/accel.sh@22 -- # case "$var" in 00:18:20.242 21:31:41 -- accel/accel.sh@20 -- # IFS=: 00:18:20.242 21:31:41 -- accel/accel.sh@20 -- # read -r var val 00:18:20.242 21:31:41 -- accel/accel.sh@21 -- # val=1 00:18:20.242 21:31:41 -- 
accel/accel.sh@22 -- # case "$var" in 00:18:20.242 21:31:41 -- accel/accel.sh@20 -- # IFS=: 00:18:20.242 21:31:41 -- accel/accel.sh@20 -- # read -r var val 00:18:20.242 21:31:41 -- accel/accel.sh@21 -- # val='1 seconds' 00:18:20.242 21:31:41 -- accel/accel.sh@22 -- # case "$var" in 00:18:20.242 21:31:41 -- accel/accel.sh@20 -- # IFS=: 00:18:20.242 21:31:41 -- accel/accel.sh@20 -- # read -r var val 00:18:20.242 21:31:41 -- accel/accel.sh@21 -- # val=Yes 00:18:20.242 21:31:41 -- accel/accel.sh@22 -- # case "$var" in 00:18:20.242 21:31:41 -- accel/accel.sh@20 -- # IFS=: 00:18:20.242 21:31:41 -- accel/accel.sh@20 -- # read -r var val 00:18:20.242 21:31:41 -- accel/accel.sh@21 -- # val= 00:18:20.242 21:31:41 -- accel/accel.sh@22 -- # case "$var" in 00:18:20.242 21:31:41 -- accel/accel.sh@20 -- # IFS=: 00:18:20.242 21:31:41 -- accel/accel.sh@20 -- # read -r var val 00:18:20.242 21:31:41 -- accel/accel.sh@21 -- # val= 00:18:20.242 21:31:41 -- accel/accel.sh@22 -- # case "$var" in 00:18:20.242 21:31:41 -- accel/accel.sh@20 -- # IFS=: 00:18:20.242 21:31:41 -- accel/accel.sh@20 -- # read -r var val 00:18:21.616 21:31:42 -- accel/accel.sh@21 -- # val= 00:18:21.616 21:31:42 -- accel/accel.sh@22 -- # case "$var" in 00:18:21.616 21:31:42 -- accel/accel.sh@20 -- # IFS=: 00:18:21.616 21:31:42 -- accel/accel.sh@20 -- # read -r var val 00:18:21.616 21:31:42 -- accel/accel.sh@21 -- # val= 00:18:21.616 21:31:42 -- accel/accel.sh@22 -- # case "$var" in 00:18:21.616 21:31:42 -- accel/accel.sh@20 -- # IFS=: 00:18:21.616 21:31:42 -- accel/accel.sh@20 -- # read -r var val 00:18:21.616 21:31:42 -- accel/accel.sh@21 -- # val= 00:18:21.616 21:31:42 -- accel/accel.sh@22 -- # case "$var" in 00:18:21.616 21:31:42 -- accel/accel.sh@20 -- # IFS=: 00:18:21.616 21:31:42 -- accel/accel.sh@20 -- # read -r var val 00:18:21.616 21:31:42 -- accel/accel.sh@21 -- # val= 00:18:21.616 21:31:42 -- accel/accel.sh@22 -- # case "$var" in 00:18:21.616 21:31:42 -- accel/accel.sh@20 -- # IFS=: 00:18:21.616 21:31:42 -- accel/accel.sh@20 -- # read -r var val 00:18:21.616 21:31:42 -- accel/accel.sh@21 -- # val= 00:18:21.616 21:31:42 -- accel/accel.sh@22 -- # case "$var" in 00:18:21.616 21:31:42 -- accel/accel.sh@20 -- # IFS=: 00:18:21.616 21:31:42 -- accel/accel.sh@20 -- # read -r var val 00:18:21.616 21:31:42 -- accel/accel.sh@21 -- # val= 00:18:21.616 21:31:42 -- accel/accel.sh@22 -- # case "$var" in 00:18:21.616 21:31:42 -- accel/accel.sh@20 -- # IFS=: 00:18:21.616 21:31:42 -- accel/accel.sh@20 -- # read -r var val 00:18:21.616 21:31:42 -- accel/accel.sh@28 -- # [[ -n software ]] 00:18:21.616 21:31:42 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:18:21.616 21:31:42 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:21.616 00:18:21.616 real 0m2.942s 00:18:21.616 user 0m2.507s 00:18:21.616 sys 0m0.230s 00:18:21.616 ************************************ 00:18:21.616 END TEST accel_xor 00:18:21.616 ************************************ 00:18:21.616 21:31:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:21.616 21:31:42 -- common/autotest_common.sh@10 -- # set +x 00:18:21.616 21:31:42 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:18:21.616 21:31:42 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:18:21.616 21:31:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:21.616 21:31:42 -- common/autotest_common.sh@10 -- # set +x 00:18:21.616 ************************************ 00:18:21.616 START TEST accel_dif_verify 00:18:21.616 ************************************ 
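The run_test call above moves the suite on from xor to the DIF-style workloads. Each of them is driven through the same accel_perf binary with only the -w argument changing, so a compact manual replay could be a small loop. This is a sketch assembled from the flags visible in the log; the ACCEL_PERF variable name is introduced here, and, as with the xor sketch earlier, it assumes the harness's -c /dev/fd/62 accel config can be omitted because no extra modules are configured in these runs.

    # Sketch: replay the DIF-style cases exercised below, one second each,
    # using the same binary and workload names that appear in this log.
    ACCEL_PERF=/home/vagrant/spdk_repo/spdk/build/examples/accel_perf
    for w in dif_verify dif_generate dif_generate_copy; do
        "$ACCEL_PERF" -t 1 -w "$w"
    done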
00:18:21.616 21:31:42 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_verify 00:18:21.616 21:31:42 -- accel/accel.sh@16 -- # local accel_opc 00:18:21.616 21:31:42 -- accel/accel.sh@17 -- # local accel_module 00:18:21.616 21:31:42 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:18:21.616 21:31:42 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:18:21.616 21:31:42 -- accel/accel.sh@12 -- # build_accel_config 00:18:21.616 21:31:42 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:18:21.616 21:31:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:18:21.616 21:31:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:18:21.616 21:31:42 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:18:21.616 21:31:42 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:18:21.616 21:31:42 -- accel/accel.sh@41 -- # local IFS=, 00:18:21.616 21:31:42 -- accel/accel.sh@42 -- # jq -r . 00:18:21.616 [2024-07-11 21:31:42.302876] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:18:21.616 [2024-07-11 21:31:42.302978] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68787 ] 00:18:21.616 [2024-07-11 21:31:42.439987] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:21.616 [2024-07-11 21:31:42.545381] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:23.031 21:31:43 -- accel/accel.sh@18 -- # out=' 00:18:23.031 SPDK Configuration: 00:18:23.031 Core mask: 0x1 00:18:23.031 00:18:23.031 Accel Perf Configuration: 00:18:23.031 Workload Type: dif_verify 00:18:23.031 Vector size: 4096 bytes 00:18:23.031 Transfer size: 4096 bytes 00:18:23.031 Block size: 512 bytes 00:18:23.031 Metadata size: 8 bytes 00:18:23.031 Vector count 1 00:18:23.031 Module: software 00:18:23.031 Queue depth: 32 00:18:23.031 Allocate depth: 32 00:18:23.031 # threads/core: 1 00:18:23.031 Run time: 1 seconds 00:18:23.031 Verify: No 00:18:23.031 00:18:23.031 Running for 1 seconds... 00:18:23.031 00:18:23.031 Core,Thread Transfers Bandwidth Failed Miscompares 00:18:23.031 ------------------------------------------------------------------------------------ 00:18:23.031 0,0 93280/s 370 MiB/s 0 0 00:18:23.031 ==================================================================================== 00:18:23.031 Total 93280/s 364 MiB/s 0 0' 00:18:23.031 21:31:43 -- accel/accel.sh@20 -- # IFS=: 00:18:23.031 21:31:43 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:18:23.031 21:31:43 -- accel/accel.sh@20 -- # read -r var val 00:18:23.031 21:31:43 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:18:23.031 21:31:43 -- accel/accel.sh@12 -- # build_accel_config 00:18:23.031 21:31:43 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:18:23.031 21:31:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:18:23.031 21:31:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:18:23.031 21:31:43 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:18:23.031 21:31:43 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:18:23.031 21:31:43 -- accel/accel.sh@41 -- # local IFS=, 00:18:23.031 21:31:43 -- accel/accel.sh@42 -- # jq -r . 00:18:23.031 [2024-07-11 21:31:43.782392] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
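Two derived checks on the dif_verify configuration and result above (computed here, not printed by the tool):

    4096 bytes/transfer / 512 bytes/block  = 8 protected blocks per transfer,
                                             each carrying 8 bytes of metadata
    93280 transfers/s * 4096 bytes         ~ 382.1 MB/s  ~ 364.4 MiB/s

the second of which lines up with the 364 MiB/s Total row of the table.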
00:18:23.031 [2024-07-11 21:31:43.782503] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68807 ] 00:18:23.031 [2024-07-11 21:31:43.915717] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:23.306 [2024-07-11 21:31:44.014260] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:23.306 21:31:44 -- accel/accel.sh@21 -- # val= 00:18:23.306 21:31:44 -- accel/accel.sh@22 -- # case "$var" in 00:18:23.306 21:31:44 -- accel/accel.sh@20 -- # IFS=: 00:18:23.306 21:31:44 -- accel/accel.sh@20 -- # read -r var val 00:18:23.306 21:31:44 -- accel/accel.sh@21 -- # val= 00:18:23.306 21:31:44 -- accel/accel.sh@22 -- # case "$var" in 00:18:23.306 21:31:44 -- accel/accel.sh@20 -- # IFS=: 00:18:23.306 21:31:44 -- accel/accel.sh@20 -- # read -r var val 00:18:23.306 21:31:44 -- accel/accel.sh@21 -- # val=0x1 00:18:23.306 21:31:44 -- accel/accel.sh@22 -- # case "$var" in 00:18:23.306 21:31:44 -- accel/accel.sh@20 -- # IFS=: 00:18:23.306 21:31:44 -- accel/accel.sh@20 -- # read -r var val 00:18:23.306 21:31:44 -- accel/accel.sh@21 -- # val= 00:18:23.306 21:31:44 -- accel/accel.sh@22 -- # case "$var" in 00:18:23.306 21:31:44 -- accel/accel.sh@20 -- # IFS=: 00:18:23.306 21:31:44 -- accel/accel.sh@20 -- # read -r var val 00:18:23.306 21:31:44 -- accel/accel.sh@21 -- # val= 00:18:23.306 21:31:44 -- accel/accel.sh@22 -- # case "$var" in 00:18:23.306 21:31:44 -- accel/accel.sh@20 -- # IFS=: 00:18:23.306 21:31:44 -- accel/accel.sh@20 -- # read -r var val 00:18:23.306 21:31:44 -- accel/accel.sh@21 -- # val=dif_verify 00:18:23.306 21:31:44 -- accel/accel.sh@22 -- # case "$var" in 00:18:23.306 21:31:44 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:18:23.306 21:31:44 -- accel/accel.sh@20 -- # IFS=: 00:18:23.306 21:31:44 -- accel/accel.sh@20 -- # read -r var val 00:18:23.306 21:31:44 -- accel/accel.sh@21 -- # val='4096 bytes' 00:18:23.306 21:31:44 -- accel/accel.sh@22 -- # case "$var" in 00:18:23.306 21:31:44 -- accel/accel.sh@20 -- # IFS=: 00:18:23.306 21:31:44 -- accel/accel.sh@20 -- # read -r var val 00:18:23.306 21:31:44 -- accel/accel.sh@21 -- # val='4096 bytes' 00:18:23.306 21:31:44 -- accel/accel.sh@22 -- # case "$var" in 00:18:23.306 21:31:44 -- accel/accel.sh@20 -- # IFS=: 00:18:23.306 21:31:44 -- accel/accel.sh@20 -- # read -r var val 00:18:23.306 21:31:44 -- accel/accel.sh@21 -- # val='512 bytes' 00:18:23.306 21:31:44 -- accel/accel.sh@22 -- # case "$var" in 00:18:23.306 21:31:44 -- accel/accel.sh@20 -- # IFS=: 00:18:23.306 21:31:44 -- accel/accel.sh@20 -- # read -r var val 00:18:23.306 21:31:44 -- accel/accel.sh@21 -- # val='8 bytes' 00:18:23.306 21:31:44 -- accel/accel.sh@22 -- # case "$var" in 00:18:23.306 21:31:44 -- accel/accel.sh@20 -- # IFS=: 00:18:23.306 21:31:44 -- accel/accel.sh@20 -- # read -r var val 00:18:23.306 21:31:44 -- accel/accel.sh@21 -- # val= 00:18:23.306 21:31:44 -- accel/accel.sh@22 -- # case "$var" in 00:18:23.306 21:31:44 -- accel/accel.sh@20 -- # IFS=: 00:18:23.306 21:31:44 -- accel/accel.sh@20 -- # read -r var val 00:18:23.306 21:31:44 -- accel/accel.sh@21 -- # val=software 00:18:23.306 21:31:44 -- accel/accel.sh@22 -- # case "$var" in 00:18:23.306 21:31:44 -- accel/accel.sh@23 -- # accel_module=software 00:18:23.306 21:31:44 -- accel/accel.sh@20 -- # IFS=: 00:18:23.306 21:31:44 -- accel/accel.sh@20 -- # read -r var val 00:18:23.306 21:31:44 -- accel/accel.sh@21 
-- # val=32 00:18:23.306 21:31:44 -- accel/accel.sh@22 -- # case "$var" in 00:18:23.306 21:31:44 -- accel/accel.sh@20 -- # IFS=: 00:18:23.306 21:31:44 -- accel/accel.sh@20 -- # read -r var val 00:18:23.306 21:31:44 -- accel/accel.sh@21 -- # val=32 00:18:23.306 21:31:44 -- accel/accel.sh@22 -- # case "$var" in 00:18:23.306 21:31:44 -- accel/accel.sh@20 -- # IFS=: 00:18:23.306 21:31:44 -- accel/accel.sh@20 -- # read -r var val 00:18:23.306 21:31:44 -- accel/accel.sh@21 -- # val=1 00:18:23.306 21:31:44 -- accel/accel.sh@22 -- # case "$var" in 00:18:23.306 21:31:44 -- accel/accel.sh@20 -- # IFS=: 00:18:23.306 21:31:44 -- accel/accel.sh@20 -- # read -r var val 00:18:23.306 21:31:44 -- accel/accel.sh@21 -- # val='1 seconds' 00:18:23.306 21:31:44 -- accel/accel.sh@22 -- # case "$var" in 00:18:23.306 21:31:44 -- accel/accel.sh@20 -- # IFS=: 00:18:23.306 21:31:44 -- accel/accel.sh@20 -- # read -r var val 00:18:23.306 21:31:44 -- accel/accel.sh@21 -- # val=No 00:18:23.306 21:31:44 -- accel/accel.sh@22 -- # case "$var" in 00:18:23.306 21:31:44 -- accel/accel.sh@20 -- # IFS=: 00:18:23.306 21:31:44 -- accel/accel.sh@20 -- # read -r var val 00:18:23.306 21:31:44 -- accel/accel.sh@21 -- # val= 00:18:23.306 21:31:44 -- accel/accel.sh@22 -- # case "$var" in 00:18:23.306 21:31:44 -- accel/accel.sh@20 -- # IFS=: 00:18:23.306 21:31:44 -- accel/accel.sh@20 -- # read -r var val 00:18:23.306 21:31:44 -- accel/accel.sh@21 -- # val= 00:18:23.306 21:31:44 -- accel/accel.sh@22 -- # case "$var" in 00:18:23.306 21:31:44 -- accel/accel.sh@20 -- # IFS=: 00:18:23.306 21:31:44 -- accel/accel.sh@20 -- # read -r var val 00:18:24.680 21:31:45 -- accel/accel.sh@21 -- # val= 00:18:24.680 21:31:45 -- accel/accel.sh@22 -- # case "$var" in 00:18:24.680 21:31:45 -- accel/accel.sh@20 -- # IFS=: 00:18:24.680 21:31:45 -- accel/accel.sh@20 -- # read -r var val 00:18:24.680 21:31:45 -- accel/accel.sh@21 -- # val= 00:18:24.680 21:31:45 -- accel/accel.sh@22 -- # case "$var" in 00:18:24.680 21:31:45 -- accel/accel.sh@20 -- # IFS=: 00:18:24.680 21:31:45 -- accel/accel.sh@20 -- # read -r var val 00:18:24.680 21:31:45 -- accel/accel.sh@21 -- # val= 00:18:24.681 21:31:45 -- accel/accel.sh@22 -- # case "$var" in 00:18:24.681 21:31:45 -- accel/accel.sh@20 -- # IFS=: 00:18:24.681 21:31:45 -- accel/accel.sh@20 -- # read -r var val 00:18:24.681 21:31:45 -- accel/accel.sh@21 -- # val= 00:18:24.681 21:31:45 -- accel/accel.sh@22 -- # case "$var" in 00:18:24.681 21:31:45 -- accel/accel.sh@20 -- # IFS=: 00:18:24.681 21:31:45 -- accel/accel.sh@20 -- # read -r var val 00:18:24.681 21:31:45 -- accel/accel.sh@21 -- # val= 00:18:24.681 21:31:45 -- accel/accel.sh@22 -- # case "$var" in 00:18:24.681 21:31:45 -- accel/accel.sh@20 -- # IFS=: 00:18:24.681 21:31:45 -- accel/accel.sh@20 -- # read -r var val 00:18:24.681 21:31:45 -- accel/accel.sh@21 -- # val= 00:18:24.681 21:31:45 -- accel/accel.sh@22 -- # case "$var" in 00:18:24.681 21:31:45 -- accel/accel.sh@20 -- # IFS=: 00:18:24.681 21:31:45 -- accel/accel.sh@20 -- # read -r var val 00:18:24.681 21:31:45 -- accel/accel.sh@28 -- # [[ -n software ]] 00:18:24.681 21:31:45 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:18:24.681 21:31:45 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:24.681 00:18:24.681 real 0m2.953s 00:18:24.681 user 0m2.519s 00:18:24.681 sys 0m0.230s 00:18:24.681 21:31:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:24.681 ************************************ 00:18:24.681 END TEST accel_dif_verify 00:18:24.681 ************************************ 00:18:24.681 
21:31:45 -- common/autotest_common.sh@10 -- # set +x 00:18:24.681 21:31:45 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:18:24.681 21:31:45 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:18:24.681 21:31:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:24.681 21:31:45 -- common/autotest_common.sh@10 -- # set +x 00:18:24.681 ************************************ 00:18:24.681 START TEST accel_dif_generate 00:18:24.681 ************************************ 00:18:24.681 21:31:45 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate 00:18:24.681 21:31:45 -- accel/accel.sh@16 -- # local accel_opc 00:18:24.681 21:31:45 -- accel/accel.sh@17 -- # local accel_module 00:18:24.681 21:31:45 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:18:24.681 21:31:45 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:18:24.681 21:31:45 -- accel/accel.sh@12 -- # build_accel_config 00:18:24.681 21:31:45 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:18:24.681 21:31:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:18:24.681 21:31:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:18:24.681 21:31:45 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:18:24.681 21:31:45 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:18:24.681 21:31:45 -- accel/accel.sh@41 -- # local IFS=, 00:18:24.681 21:31:45 -- accel/accel.sh@42 -- # jq -r . 00:18:24.681 [2024-07-11 21:31:45.300814] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:18:24.681 [2024-07-11 21:31:45.300954] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68841 ] 00:18:24.681 [2024-07-11 21:31:45.440383] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:24.681 [2024-07-11 21:31:45.541107] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:26.056 21:31:46 -- accel/accel.sh@18 -- # out=' 00:18:26.056 SPDK Configuration: 00:18:26.056 Core mask: 0x1 00:18:26.056 00:18:26.056 Accel Perf Configuration: 00:18:26.056 Workload Type: dif_generate 00:18:26.056 Vector size: 4096 bytes 00:18:26.056 Transfer size: 4096 bytes 00:18:26.056 Block size: 512 bytes 00:18:26.056 Metadata size: 8 bytes 00:18:26.056 Vector count 1 00:18:26.056 Module: software 00:18:26.056 Queue depth: 32 00:18:26.056 Allocate depth: 32 00:18:26.056 # threads/core: 1 00:18:26.056 Run time: 1 seconds 00:18:26.056 Verify: No 00:18:26.056 00:18:26.056 Running for 1 seconds... 
00:18:26.056 00:18:26.056 Core,Thread Transfers Bandwidth Failed Miscompares 00:18:26.056 ------------------------------------------------------------------------------------ 00:18:26.056 0,0 120768/s 479 MiB/s 0 0 00:18:26.056 ==================================================================================== 00:18:26.056 Total 120768/s 471 MiB/s 0 0' 00:18:26.056 21:31:46 -- accel/accel.sh@20 -- # IFS=: 00:18:26.056 21:31:46 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:18:26.056 21:31:46 -- accel/accel.sh@20 -- # read -r var val 00:18:26.056 21:31:46 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:18:26.056 21:31:46 -- accel/accel.sh@12 -- # build_accel_config 00:18:26.056 21:31:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:18:26.056 21:31:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:18:26.056 21:31:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:18:26.056 21:31:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:18:26.056 21:31:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:18:26.056 21:31:46 -- accel/accel.sh@41 -- # local IFS=, 00:18:26.056 21:31:46 -- accel/accel.sh@42 -- # jq -r . 00:18:26.056 [2024-07-11 21:31:46.778743] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:18:26.056 [2024-07-11 21:31:46.778878] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68861 ] 00:18:26.056 [2024-07-11 21:31:46.922452] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:26.315 [2024-07-11 21:31:47.018509] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:26.315 21:31:47 -- accel/accel.sh@21 -- # val= 00:18:26.315 21:31:47 -- accel/accel.sh@22 -- # case "$var" in 00:18:26.315 21:31:47 -- accel/accel.sh@20 -- # IFS=: 00:18:26.315 21:31:47 -- accel/accel.sh@20 -- # read -r var val 00:18:26.315 21:31:47 -- accel/accel.sh@21 -- # val= 00:18:26.315 21:31:47 -- accel/accel.sh@22 -- # case "$var" in 00:18:26.315 21:31:47 -- accel/accel.sh@20 -- # IFS=: 00:18:26.315 21:31:47 -- accel/accel.sh@20 -- # read -r var val 00:18:26.315 21:31:47 -- accel/accel.sh@21 -- # val=0x1 00:18:26.315 21:31:47 -- accel/accel.sh@22 -- # case "$var" in 00:18:26.315 21:31:47 -- accel/accel.sh@20 -- # IFS=: 00:18:26.315 21:31:47 -- accel/accel.sh@20 -- # read -r var val 00:18:26.315 21:31:47 -- accel/accel.sh@21 -- # val= 00:18:26.315 21:31:47 -- accel/accel.sh@22 -- # case "$var" in 00:18:26.315 21:31:47 -- accel/accel.sh@20 -- # IFS=: 00:18:26.315 21:31:47 -- accel/accel.sh@20 -- # read -r var val 00:18:26.315 21:31:47 -- accel/accel.sh@21 -- # val= 00:18:26.315 21:31:47 -- accel/accel.sh@22 -- # case "$var" in 00:18:26.315 21:31:47 -- accel/accel.sh@20 -- # IFS=: 00:18:26.315 21:31:47 -- accel/accel.sh@20 -- # read -r var val 00:18:26.315 21:31:47 -- accel/accel.sh@21 -- # val=dif_generate 00:18:26.315 21:31:47 -- accel/accel.sh@22 -- # case "$var" in 00:18:26.315 21:31:47 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:18:26.315 21:31:47 -- accel/accel.sh@20 -- # IFS=: 00:18:26.315 21:31:47 -- accel/accel.sh@20 -- # read -r var val 00:18:26.315 21:31:47 -- accel/accel.sh@21 -- # val='4096 bytes' 00:18:26.315 21:31:47 -- accel/accel.sh@22 -- # case "$var" in 00:18:26.315 21:31:47 -- accel/accel.sh@20 -- # IFS=: 00:18:26.315 21:31:47 -- accel/accel.sh@20 -- # read -r var val 
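The same sanity check applied to the dif_generate table above (derived here, not part of the tool output): 120768 transfers/s * 4096 bytes ~ 494.7 MB/s ~ 471.8 MiB/s, consistent with the 471 MiB/s Total row.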
00:18:26.315 21:31:47 -- accel/accel.sh@21 -- # val='4096 bytes' 00:18:26.315 21:31:47 -- accel/accel.sh@22 -- # case "$var" in 00:18:26.315 21:31:47 -- accel/accel.sh@20 -- # IFS=: 00:18:26.315 21:31:47 -- accel/accel.sh@20 -- # read -r var val 00:18:26.315 21:31:47 -- accel/accel.sh@21 -- # val='512 bytes' 00:18:26.315 21:31:47 -- accel/accel.sh@22 -- # case "$var" in 00:18:26.315 21:31:47 -- accel/accel.sh@20 -- # IFS=: 00:18:26.315 21:31:47 -- accel/accel.sh@20 -- # read -r var val 00:18:26.315 21:31:47 -- accel/accel.sh@21 -- # val='8 bytes' 00:18:26.315 21:31:47 -- accel/accel.sh@22 -- # case "$var" in 00:18:26.315 21:31:47 -- accel/accel.sh@20 -- # IFS=: 00:18:26.315 21:31:47 -- accel/accel.sh@20 -- # read -r var val 00:18:26.315 21:31:47 -- accel/accel.sh@21 -- # val= 00:18:26.315 21:31:47 -- accel/accel.sh@22 -- # case "$var" in 00:18:26.315 21:31:47 -- accel/accel.sh@20 -- # IFS=: 00:18:26.315 21:31:47 -- accel/accel.sh@20 -- # read -r var val 00:18:26.315 21:31:47 -- accel/accel.sh@21 -- # val=software 00:18:26.315 21:31:47 -- accel/accel.sh@22 -- # case "$var" in 00:18:26.315 21:31:47 -- accel/accel.sh@23 -- # accel_module=software 00:18:26.315 21:31:47 -- accel/accel.sh@20 -- # IFS=: 00:18:26.315 21:31:47 -- accel/accel.sh@20 -- # read -r var val 00:18:26.315 21:31:47 -- accel/accel.sh@21 -- # val=32 00:18:26.315 21:31:47 -- accel/accel.sh@22 -- # case "$var" in 00:18:26.315 21:31:47 -- accel/accel.sh@20 -- # IFS=: 00:18:26.315 21:31:47 -- accel/accel.sh@20 -- # read -r var val 00:18:26.315 21:31:47 -- accel/accel.sh@21 -- # val=32 00:18:26.315 21:31:47 -- accel/accel.sh@22 -- # case "$var" in 00:18:26.315 21:31:47 -- accel/accel.sh@20 -- # IFS=: 00:18:26.315 21:31:47 -- accel/accel.sh@20 -- # read -r var val 00:18:26.315 21:31:47 -- accel/accel.sh@21 -- # val=1 00:18:26.315 21:31:47 -- accel/accel.sh@22 -- # case "$var" in 00:18:26.315 21:31:47 -- accel/accel.sh@20 -- # IFS=: 00:18:26.315 21:31:47 -- accel/accel.sh@20 -- # read -r var val 00:18:26.315 21:31:47 -- accel/accel.sh@21 -- # val='1 seconds' 00:18:26.315 21:31:47 -- accel/accel.sh@22 -- # case "$var" in 00:18:26.315 21:31:47 -- accel/accel.sh@20 -- # IFS=: 00:18:26.315 21:31:47 -- accel/accel.sh@20 -- # read -r var val 00:18:26.315 21:31:47 -- accel/accel.sh@21 -- # val=No 00:18:26.315 21:31:47 -- accel/accel.sh@22 -- # case "$var" in 00:18:26.315 21:31:47 -- accel/accel.sh@20 -- # IFS=: 00:18:26.315 21:31:47 -- accel/accel.sh@20 -- # read -r var val 00:18:26.315 21:31:47 -- accel/accel.sh@21 -- # val= 00:18:26.315 21:31:47 -- accel/accel.sh@22 -- # case "$var" in 00:18:26.315 21:31:47 -- accel/accel.sh@20 -- # IFS=: 00:18:26.315 21:31:47 -- accel/accel.sh@20 -- # read -r var val 00:18:26.315 21:31:47 -- accel/accel.sh@21 -- # val= 00:18:26.315 21:31:47 -- accel/accel.sh@22 -- # case "$var" in 00:18:26.315 21:31:47 -- accel/accel.sh@20 -- # IFS=: 00:18:26.315 21:31:47 -- accel/accel.sh@20 -- # read -r var val 00:18:27.691 21:31:48 -- accel/accel.sh@21 -- # val= 00:18:27.691 21:31:48 -- accel/accel.sh@22 -- # case "$var" in 00:18:27.691 21:31:48 -- accel/accel.sh@20 -- # IFS=: 00:18:27.691 21:31:48 -- accel/accel.sh@20 -- # read -r var val 00:18:27.691 21:31:48 -- accel/accel.sh@21 -- # val= 00:18:27.691 21:31:48 -- accel/accel.sh@22 -- # case "$var" in 00:18:27.691 21:31:48 -- accel/accel.sh@20 -- # IFS=: 00:18:27.691 21:31:48 -- accel/accel.sh@20 -- # read -r var val 00:18:27.691 21:31:48 -- accel/accel.sh@21 -- # val= 00:18:27.691 21:31:48 -- accel/accel.sh@22 -- # case "$var" in 00:18:27.691 21:31:48 -- 
accel/accel.sh@20 -- # IFS=: 00:18:27.691 21:31:48 -- accel/accel.sh@20 -- # read -r var val 00:18:27.691 21:31:48 -- accel/accel.sh@21 -- # val= 00:18:27.691 21:31:48 -- accel/accel.sh@22 -- # case "$var" in 00:18:27.691 21:31:48 -- accel/accel.sh@20 -- # IFS=: 00:18:27.691 21:31:48 -- accel/accel.sh@20 -- # read -r var val 00:18:27.691 21:31:48 -- accel/accel.sh@21 -- # val= 00:18:27.691 21:31:48 -- accel/accel.sh@22 -- # case "$var" in 00:18:27.691 21:31:48 -- accel/accel.sh@20 -- # IFS=: 00:18:27.691 21:31:48 -- accel/accel.sh@20 -- # read -r var val 00:18:27.691 21:31:48 -- accel/accel.sh@21 -- # val= 00:18:27.691 21:31:48 -- accel/accel.sh@22 -- # case "$var" in 00:18:27.691 21:31:48 -- accel/accel.sh@20 -- # IFS=: 00:18:27.691 21:31:48 -- accel/accel.sh@20 -- # read -r var val 00:18:27.691 21:31:48 -- accel/accel.sh@28 -- # [[ -n software ]] 00:18:27.691 21:31:48 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:18:27.691 21:31:48 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:27.691 00:18:27.691 real 0m2.958s 00:18:27.691 user 0m2.521s 00:18:27.691 sys 0m0.233s 00:18:27.691 21:31:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:27.691 ************************************ 00:18:27.691 END TEST accel_dif_generate 00:18:27.691 21:31:48 -- common/autotest_common.sh@10 -- # set +x 00:18:27.691 ************************************ 00:18:27.691 21:31:48 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:18:27.691 21:31:48 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:18:27.691 21:31:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:27.691 21:31:48 -- common/autotest_common.sh@10 -- # set +x 00:18:27.691 ************************************ 00:18:27.691 START TEST accel_dif_generate_copy 00:18:27.691 ************************************ 00:18:27.691 21:31:48 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate_copy 00:18:27.691 21:31:48 -- accel/accel.sh@16 -- # local accel_opc 00:18:27.691 21:31:48 -- accel/accel.sh@17 -- # local accel_module 00:18:27.691 21:31:48 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:18:27.691 21:31:48 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:18:27.691 21:31:48 -- accel/accel.sh@12 -- # build_accel_config 00:18:27.691 21:31:48 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:18:27.691 21:31:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:18:27.691 21:31:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:18:27.691 21:31:48 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:18:27.691 21:31:48 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:18:27.691 21:31:48 -- accel/accel.sh@41 -- # local IFS=, 00:18:27.691 21:31:48 -- accel/accel.sh@42 -- # jq -r . 00:18:27.691 [2024-07-11 21:31:48.305779] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:18:27.692 [2024-07-11 21:31:48.305896] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68895 ] 00:18:27.692 [2024-07-11 21:31:48.444699] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:27.692 [2024-07-11 21:31:48.549928] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:29.066 21:31:49 -- accel/accel.sh@18 -- # out=' 00:18:29.066 SPDK Configuration: 00:18:29.066 Core mask: 0x1 00:18:29.066 00:18:29.066 Accel Perf Configuration: 00:18:29.066 Workload Type: dif_generate_copy 00:18:29.066 Vector size: 4096 bytes 00:18:29.066 Transfer size: 4096 bytes 00:18:29.066 Vector count 1 00:18:29.066 Module: software 00:18:29.066 Queue depth: 32 00:18:29.066 Allocate depth: 32 00:18:29.066 # threads/core: 1 00:18:29.066 Run time: 1 seconds 00:18:29.066 Verify: No 00:18:29.066 00:18:29.066 Running for 1 seconds... 00:18:29.066 00:18:29.066 Core,Thread Transfers Bandwidth Failed Miscompares 00:18:29.066 ------------------------------------------------------------------------------------ 00:18:29.066 0,0 86688/s 343 MiB/s 0 0 00:18:29.066 ==================================================================================== 00:18:29.066 Total 86688/s 338 MiB/s 0 0' 00:18:29.066 21:31:49 -- accel/accel.sh@20 -- # IFS=: 00:18:29.066 21:31:49 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:18:29.066 21:31:49 -- accel/accel.sh@20 -- # read -r var val 00:18:29.066 21:31:49 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:18:29.066 21:31:49 -- accel/accel.sh@12 -- # build_accel_config 00:18:29.066 21:31:49 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:18:29.066 21:31:49 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:18:29.066 21:31:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:18:29.066 21:31:49 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:18:29.066 21:31:49 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:18:29.066 21:31:49 -- accel/accel.sh@41 -- # local IFS=, 00:18:29.066 21:31:49 -- accel/accel.sh@42 -- # jq -r . 00:18:29.066 [2024-07-11 21:31:49.797276] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
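For the dif_generate_copy run above, the same derived check gives 86688 transfers/s * 4096 bytes ~ 355.1 MB/s ~ 338.6 MiB/s, matching the 338 MiB/s Total row and down from the 471 MiB/s measured for the plain dif_generate case earlier in this log.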
00:18:29.066 [2024-07-11 21:31:49.797386] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68915 ] 00:18:29.066 [2024-07-11 21:31:49.938230] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:29.324 [2024-07-11 21:31:50.047252] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:29.324 21:31:50 -- accel/accel.sh@21 -- # val= 00:18:29.324 21:31:50 -- accel/accel.sh@22 -- # case "$var" in 00:18:29.324 21:31:50 -- accel/accel.sh@20 -- # IFS=: 00:18:29.324 21:31:50 -- accel/accel.sh@20 -- # read -r var val 00:18:29.324 21:31:50 -- accel/accel.sh@21 -- # val= 00:18:29.324 21:31:50 -- accel/accel.sh@22 -- # case "$var" in 00:18:29.324 21:31:50 -- accel/accel.sh@20 -- # IFS=: 00:18:29.324 21:31:50 -- accel/accel.sh@20 -- # read -r var val 00:18:29.324 21:31:50 -- accel/accel.sh@21 -- # val=0x1 00:18:29.324 21:31:50 -- accel/accel.sh@22 -- # case "$var" in 00:18:29.324 21:31:50 -- accel/accel.sh@20 -- # IFS=: 00:18:29.324 21:31:50 -- accel/accel.sh@20 -- # read -r var val 00:18:29.324 21:31:50 -- accel/accel.sh@21 -- # val= 00:18:29.324 21:31:50 -- accel/accel.sh@22 -- # case "$var" in 00:18:29.324 21:31:50 -- accel/accel.sh@20 -- # IFS=: 00:18:29.324 21:31:50 -- accel/accel.sh@20 -- # read -r var val 00:18:29.324 21:31:50 -- accel/accel.sh@21 -- # val= 00:18:29.324 21:31:50 -- accel/accel.sh@22 -- # case "$var" in 00:18:29.324 21:31:50 -- accel/accel.sh@20 -- # IFS=: 00:18:29.324 21:31:50 -- accel/accel.sh@20 -- # read -r var val 00:18:29.324 21:31:50 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:18:29.324 21:31:50 -- accel/accel.sh@22 -- # case "$var" in 00:18:29.324 21:31:50 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:18:29.324 21:31:50 -- accel/accel.sh@20 -- # IFS=: 00:18:29.324 21:31:50 -- accel/accel.sh@20 -- # read -r var val 00:18:29.324 21:31:50 -- accel/accel.sh@21 -- # val='4096 bytes' 00:18:29.324 21:31:50 -- accel/accel.sh@22 -- # case "$var" in 00:18:29.324 21:31:50 -- accel/accel.sh@20 -- # IFS=: 00:18:29.324 21:31:50 -- accel/accel.sh@20 -- # read -r var val 00:18:29.324 21:31:50 -- accel/accel.sh@21 -- # val='4096 bytes' 00:18:29.324 21:31:50 -- accel/accel.sh@22 -- # case "$var" in 00:18:29.324 21:31:50 -- accel/accel.sh@20 -- # IFS=: 00:18:29.324 21:31:50 -- accel/accel.sh@20 -- # read -r var val 00:18:29.324 21:31:50 -- accel/accel.sh@21 -- # val= 00:18:29.324 21:31:50 -- accel/accel.sh@22 -- # case "$var" in 00:18:29.324 21:31:50 -- accel/accel.sh@20 -- # IFS=: 00:18:29.324 21:31:50 -- accel/accel.sh@20 -- # read -r var val 00:18:29.324 21:31:50 -- accel/accel.sh@21 -- # val=software 00:18:29.324 21:31:50 -- accel/accel.sh@22 -- # case "$var" in 00:18:29.324 21:31:50 -- accel/accel.sh@23 -- # accel_module=software 00:18:29.324 21:31:50 -- accel/accel.sh@20 -- # IFS=: 00:18:29.324 21:31:50 -- accel/accel.sh@20 -- # read -r var val 00:18:29.324 21:31:50 -- accel/accel.sh@21 -- # val=32 00:18:29.324 21:31:50 -- accel/accel.sh@22 -- # case "$var" in 00:18:29.324 21:31:50 -- accel/accel.sh@20 -- # IFS=: 00:18:29.324 21:31:50 -- accel/accel.sh@20 -- # read -r var val 00:18:29.324 21:31:50 -- accel/accel.sh@21 -- # val=32 00:18:29.324 21:31:50 -- accel/accel.sh@22 -- # case "$var" in 00:18:29.324 21:31:50 -- accel/accel.sh@20 -- # IFS=: 00:18:29.324 21:31:50 -- accel/accel.sh@20 -- # read -r var val 00:18:29.324 21:31:50 -- accel/accel.sh@21 
-- # val=1 00:18:29.324 21:31:50 -- accel/accel.sh@22 -- # case "$var" in 00:18:29.324 21:31:50 -- accel/accel.sh@20 -- # IFS=: 00:18:29.324 21:31:50 -- accel/accel.sh@20 -- # read -r var val 00:18:29.324 21:31:50 -- accel/accel.sh@21 -- # val='1 seconds' 00:18:29.324 21:31:50 -- accel/accel.sh@22 -- # case "$var" in 00:18:29.324 21:31:50 -- accel/accel.sh@20 -- # IFS=: 00:18:29.324 21:31:50 -- accel/accel.sh@20 -- # read -r var val 00:18:29.324 21:31:50 -- accel/accel.sh@21 -- # val=No 00:18:29.324 21:31:50 -- accel/accel.sh@22 -- # case "$var" in 00:18:29.324 21:31:50 -- accel/accel.sh@20 -- # IFS=: 00:18:29.324 21:31:50 -- accel/accel.sh@20 -- # read -r var val 00:18:29.324 21:31:50 -- accel/accel.sh@21 -- # val= 00:18:29.324 21:31:50 -- accel/accel.sh@22 -- # case "$var" in 00:18:29.324 21:31:50 -- accel/accel.sh@20 -- # IFS=: 00:18:29.324 21:31:50 -- accel/accel.sh@20 -- # read -r var val 00:18:29.324 21:31:50 -- accel/accel.sh@21 -- # val= 00:18:29.324 21:31:50 -- accel/accel.sh@22 -- # case "$var" in 00:18:29.324 21:31:50 -- accel/accel.sh@20 -- # IFS=: 00:18:29.324 21:31:50 -- accel/accel.sh@20 -- # read -r var val 00:18:30.697 21:31:51 -- accel/accel.sh@21 -- # val= 00:18:30.697 21:31:51 -- accel/accel.sh@22 -- # case "$var" in 00:18:30.697 21:31:51 -- accel/accel.sh@20 -- # IFS=: 00:18:30.697 21:31:51 -- accel/accel.sh@20 -- # read -r var val 00:18:30.697 21:31:51 -- accel/accel.sh@21 -- # val= 00:18:30.697 21:31:51 -- accel/accel.sh@22 -- # case "$var" in 00:18:30.697 21:31:51 -- accel/accel.sh@20 -- # IFS=: 00:18:30.697 21:31:51 -- accel/accel.sh@20 -- # read -r var val 00:18:30.697 21:31:51 -- accel/accel.sh@21 -- # val= 00:18:30.697 21:31:51 -- accel/accel.sh@22 -- # case "$var" in 00:18:30.697 21:31:51 -- accel/accel.sh@20 -- # IFS=: 00:18:30.697 21:31:51 -- accel/accel.sh@20 -- # read -r var val 00:18:30.697 21:31:51 -- accel/accel.sh@21 -- # val= 00:18:30.697 21:31:51 -- accel/accel.sh@22 -- # case "$var" in 00:18:30.697 21:31:51 -- accel/accel.sh@20 -- # IFS=: 00:18:30.697 21:31:51 -- accel/accel.sh@20 -- # read -r var val 00:18:30.697 21:31:51 -- accel/accel.sh@21 -- # val= 00:18:30.697 21:31:51 -- accel/accel.sh@22 -- # case "$var" in 00:18:30.697 21:31:51 -- accel/accel.sh@20 -- # IFS=: 00:18:30.697 21:31:51 -- accel/accel.sh@20 -- # read -r var val 00:18:30.697 21:31:51 -- accel/accel.sh@21 -- # val= 00:18:30.697 21:31:51 -- accel/accel.sh@22 -- # case "$var" in 00:18:30.697 21:31:51 -- accel/accel.sh@20 -- # IFS=: 00:18:30.697 21:31:51 -- accel/accel.sh@20 -- # read -r var val 00:18:30.697 21:31:51 -- accel/accel.sh@28 -- # [[ -n software ]] 00:18:30.697 21:31:51 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:18:30.697 21:31:51 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:30.697 00:18:30.697 real 0m2.995s 00:18:30.697 user 0m2.557s 00:18:30.697 sys 0m0.230s 00:18:30.697 21:31:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:30.697 21:31:51 -- common/autotest_common.sh@10 -- # set +x 00:18:30.697 ************************************ 00:18:30.697 END TEST accel_dif_generate_copy 00:18:30.697 ************************************ 00:18:30.697 21:31:51 -- accel/accel.sh@107 -- # [[ y == y ]] 00:18:30.698 21:31:51 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:18:30.698 21:31:51 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:18:30.698 21:31:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:30.698 21:31:51 -- 
common/autotest_common.sh@10 -- # set +x 00:18:30.698 ************************************ 00:18:30.698 START TEST accel_comp 00:18:30.698 ************************************ 00:18:30.698 21:31:51 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:18:30.698 21:31:51 -- accel/accel.sh@16 -- # local accel_opc 00:18:30.698 21:31:51 -- accel/accel.sh@17 -- # local accel_module 00:18:30.698 21:31:51 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:18:30.698 21:31:51 -- accel/accel.sh@12 -- # build_accel_config 00:18:30.698 21:31:51 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:18:30.698 21:31:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:18:30.698 21:31:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:18:30.698 21:31:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:18:30.698 21:31:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:18:30.698 21:31:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:18:30.698 21:31:51 -- accel/accel.sh@41 -- # local IFS=, 00:18:30.698 21:31:51 -- accel/accel.sh@42 -- # jq -r . 00:18:30.698 [2024-07-11 21:31:51.342869] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:18:30.698 [2024-07-11 21:31:51.342996] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68944 ] 00:18:30.698 [2024-07-11 21:31:51.485477] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:30.698 [2024-07-11 21:31:51.591425] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:32.077 21:31:52 -- accel/accel.sh@18 -- # out='Preparing input file... 00:18:32.077 00:18:32.077 SPDK Configuration: 00:18:32.077 Core mask: 0x1 00:18:32.077 00:18:32.077 Accel Perf Configuration: 00:18:32.077 Workload Type: compress 00:18:32.077 Transfer size: 4096 bytes 00:18:32.077 Vector count 1 00:18:32.077 Module: software 00:18:32.077 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:18:32.077 Queue depth: 32 00:18:32.077 Allocate depth: 32 00:18:32.077 # threads/core: 1 00:18:32.077 Run time: 1 seconds 00:18:32.077 Verify: No 00:18:32.077 00:18:32.077 Running for 1 seconds... 
00:18:32.077 00:18:32.077 Core,Thread Transfers Bandwidth Failed Miscompares 00:18:32.077 ------------------------------------------------------------------------------------ 00:18:32.077 0,0 47712/s 198 MiB/s 0 0 00:18:32.077 ==================================================================================== 00:18:32.077 Total 47712/s 186 MiB/s 0 0' 00:18:32.077 21:31:52 -- accel/accel.sh@20 -- # IFS=: 00:18:32.077 21:31:52 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:18:32.077 21:31:52 -- accel/accel.sh@20 -- # read -r var val 00:18:32.077 21:31:52 -- accel/accel.sh@12 -- # build_accel_config 00:18:32.077 21:31:52 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:18:32.077 21:31:52 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:18:32.077 21:31:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:18:32.077 21:31:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:18:32.077 21:31:52 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:18:32.077 21:31:52 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:18:32.077 21:31:52 -- accel/accel.sh@41 -- # local IFS=, 00:18:32.077 21:31:52 -- accel/accel.sh@42 -- # jq -r . 00:18:32.077 [2024-07-11 21:31:52.841120] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:18:32.077 [2024-07-11 21:31:52.841231] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68969 ] 00:18:32.077 [2024-07-11 21:31:52.978444] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:32.334 [2024-07-11 21:31:53.084524] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:32.334 21:31:53 -- accel/accel.sh@21 -- # val= 00:18:32.334 21:31:53 -- accel/accel.sh@22 -- # case "$var" in 00:18:32.334 21:31:53 -- accel/accel.sh@20 -- # IFS=: 00:18:32.334 21:31:53 -- accel/accel.sh@20 -- # read -r var val 00:18:32.334 21:31:53 -- accel/accel.sh@21 -- # val= 00:18:32.334 21:31:53 -- accel/accel.sh@22 -- # case "$var" in 00:18:32.334 21:31:53 -- accel/accel.sh@20 -- # IFS=: 00:18:32.334 21:31:53 -- accel/accel.sh@20 -- # read -r var val 00:18:32.335 21:31:53 -- accel/accel.sh@21 -- # val= 00:18:32.335 21:31:53 -- accel/accel.sh@22 -- # case "$var" in 00:18:32.335 21:31:53 -- accel/accel.sh@20 -- # IFS=: 00:18:32.335 21:31:53 -- accel/accel.sh@20 -- # read -r var val 00:18:32.335 21:31:53 -- accel/accel.sh@21 -- # val=0x1 00:18:32.335 21:31:53 -- accel/accel.sh@22 -- # case "$var" in 00:18:32.335 21:31:53 -- accel/accel.sh@20 -- # IFS=: 00:18:32.335 21:31:53 -- accel/accel.sh@20 -- # read -r var val 00:18:32.335 21:31:53 -- accel/accel.sh@21 -- # val= 00:18:32.335 21:31:53 -- accel/accel.sh@22 -- # case "$var" in 00:18:32.335 21:31:53 -- accel/accel.sh@20 -- # IFS=: 00:18:32.335 21:31:53 -- accel/accel.sh@20 -- # read -r var val 00:18:32.335 21:31:53 -- accel/accel.sh@21 -- # val= 00:18:32.335 21:31:53 -- accel/accel.sh@22 -- # case "$var" in 00:18:32.335 21:31:53 -- accel/accel.sh@20 -- # IFS=: 00:18:32.335 21:31:53 -- accel/accel.sh@20 -- # read -r var val 00:18:32.335 21:31:53 -- accel/accel.sh@21 -- # val=compress 00:18:32.335 21:31:53 -- accel/accel.sh@22 -- # case "$var" in 00:18:32.335 21:31:53 -- accel/accel.sh@24 -- # accel_opc=compress 00:18:32.335 21:31:53 -- accel/accel.sh@20 -- # IFS=: 
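The compress case above differs from the earlier workloads in that -l points accel_perf at an input file (test/accel/bib, echoed back as "File Name" in the configuration block), which is why this run is preceded by "Preparing input file...". The usual derived check still holds: 47712 transfers/s * 4096 bytes ~ 195.4 MB/s ~ 186.4 MiB/s, matching the 186 MiB/s Total row.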
00:18:32.335 21:31:53 -- accel/accel.sh@20 -- # read -r var val 00:18:32.335 21:31:53 -- accel/accel.sh@21 -- # val='4096 bytes' 00:18:32.335 21:31:53 -- accel/accel.sh@22 -- # case "$var" in 00:18:32.335 21:31:53 -- accel/accel.sh@20 -- # IFS=: 00:18:32.335 21:31:53 -- accel/accel.sh@20 -- # read -r var val 00:18:32.335 21:31:53 -- accel/accel.sh@21 -- # val= 00:18:32.335 21:31:53 -- accel/accel.sh@22 -- # case "$var" in 00:18:32.335 21:31:53 -- accel/accel.sh@20 -- # IFS=: 00:18:32.335 21:31:53 -- accel/accel.sh@20 -- # read -r var val 00:18:32.335 21:31:53 -- accel/accel.sh@21 -- # val=software 00:18:32.335 21:31:53 -- accel/accel.sh@22 -- # case "$var" in 00:18:32.335 21:31:53 -- accel/accel.sh@23 -- # accel_module=software 00:18:32.335 21:31:53 -- accel/accel.sh@20 -- # IFS=: 00:18:32.335 21:31:53 -- accel/accel.sh@20 -- # read -r var val 00:18:32.335 21:31:53 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:18:32.335 21:31:53 -- accel/accel.sh@22 -- # case "$var" in 00:18:32.335 21:31:53 -- accel/accel.sh@20 -- # IFS=: 00:18:32.335 21:31:53 -- accel/accel.sh@20 -- # read -r var val 00:18:32.335 21:31:53 -- accel/accel.sh@21 -- # val=32 00:18:32.335 21:31:53 -- accel/accel.sh@22 -- # case "$var" in 00:18:32.335 21:31:53 -- accel/accel.sh@20 -- # IFS=: 00:18:32.335 21:31:53 -- accel/accel.sh@20 -- # read -r var val 00:18:32.335 21:31:53 -- accel/accel.sh@21 -- # val=32 00:18:32.335 21:31:53 -- accel/accel.sh@22 -- # case "$var" in 00:18:32.335 21:31:53 -- accel/accel.sh@20 -- # IFS=: 00:18:32.335 21:31:53 -- accel/accel.sh@20 -- # read -r var val 00:18:32.335 21:31:53 -- accel/accel.sh@21 -- # val=1 00:18:32.335 21:31:53 -- accel/accel.sh@22 -- # case "$var" in 00:18:32.335 21:31:53 -- accel/accel.sh@20 -- # IFS=: 00:18:32.335 21:31:53 -- accel/accel.sh@20 -- # read -r var val 00:18:32.335 21:31:53 -- accel/accel.sh@21 -- # val='1 seconds' 00:18:32.335 21:31:53 -- accel/accel.sh@22 -- # case "$var" in 00:18:32.335 21:31:53 -- accel/accel.sh@20 -- # IFS=: 00:18:32.335 21:31:53 -- accel/accel.sh@20 -- # read -r var val 00:18:32.335 21:31:53 -- accel/accel.sh@21 -- # val=No 00:18:32.335 21:31:53 -- accel/accel.sh@22 -- # case "$var" in 00:18:32.335 21:31:53 -- accel/accel.sh@20 -- # IFS=: 00:18:32.335 21:31:53 -- accel/accel.sh@20 -- # read -r var val 00:18:32.335 21:31:53 -- accel/accel.sh@21 -- # val= 00:18:32.335 21:31:53 -- accel/accel.sh@22 -- # case "$var" in 00:18:32.335 21:31:53 -- accel/accel.sh@20 -- # IFS=: 00:18:32.335 21:31:53 -- accel/accel.sh@20 -- # read -r var val 00:18:32.335 21:31:53 -- accel/accel.sh@21 -- # val= 00:18:32.335 21:31:53 -- accel/accel.sh@22 -- # case "$var" in 00:18:32.335 21:31:53 -- accel/accel.sh@20 -- # IFS=: 00:18:32.335 21:31:53 -- accel/accel.sh@20 -- # read -r var val 00:18:33.708 21:31:54 -- accel/accel.sh@21 -- # val= 00:18:33.708 21:31:54 -- accel/accel.sh@22 -- # case "$var" in 00:18:33.708 21:31:54 -- accel/accel.sh@20 -- # IFS=: 00:18:33.708 21:31:54 -- accel/accel.sh@20 -- # read -r var val 00:18:33.708 21:31:54 -- accel/accel.sh@21 -- # val= 00:18:33.708 21:31:54 -- accel/accel.sh@22 -- # case "$var" in 00:18:33.708 21:31:54 -- accel/accel.sh@20 -- # IFS=: 00:18:33.708 21:31:54 -- accel/accel.sh@20 -- # read -r var val 00:18:33.708 21:31:54 -- accel/accel.sh@21 -- # val= 00:18:33.708 21:31:54 -- accel/accel.sh@22 -- # case "$var" in 00:18:33.708 21:31:54 -- accel/accel.sh@20 -- # IFS=: 00:18:33.708 21:31:54 -- accel/accel.sh@20 -- # read -r var val 00:18:33.708 21:31:54 -- accel/accel.sh@21 -- # val= 
00:18:33.708 21:31:54 -- accel/accel.sh@22 -- # case "$var" in 00:18:33.708 21:31:54 -- accel/accel.sh@20 -- # IFS=: 00:18:33.708 21:31:54 -- accel/accel.sh@20 -- # read -r var val 00:18:33.708 ************************************ 00:18:33.708 END TEST accel_comp 00:18:33.708 ************************************ 00:18:33.708 21:31:54 -- accel/accel.sh@21 -- # val= 00:18:33.708 21:31:54 -- accel/accel.sh@22 -- # case "$var" in 00:18:33.708 21:31:54 -- accel/accel.sh@20 -- # IFS=: 00:18:33.708 21:31:54 -- accel/accel.sh@20 -- # read -r var val 00:18:33.708 21:31:54 -- accel/accel.sh@21 -- # val= 00:18:33.708 21:31:54 -- accel/accel.sh@22 -- # case "$var" in 00:18:33.708 21:31:54 -- accel/accel.sh@20 -- # IFS=: 00:18:33.708 21:31:54 -- accel/accel.sh@20 -- # read -r var val 00:18:33.708 21:31:54 -- accel/accel.sh@28 -- # [[ -n software ]] 00:18:33.708 21:31:54 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:18:33.708 21:31:54 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:33.708 00:18:33.708 real 0m3.014s 00:18:33.708 user 0m2.566s 00:18:33.708 sys 0m0.237s 00:18:33.708 21:31:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:33.708 21:31:54 -- common/autotest_common.sh@10 -- # set +x 00:18:33.708 21:31:54 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:18:33.708 21:31:54 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:18:33.708 21:31:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:33.708 21:31:54 -- common/autotest_common.sh@10 -- # set +x 00:18:33.708 ************************************ 00:18:33.708 START TEST accel_decomp 00:18:33.708 ************************************ 00:18:33.708 21:31:54 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:18:33.708 21:31:54 -- accel/accel.sh@16 -- # local accel_opc 00:18:33.708 21:31:54 -- accel/accel.sh@17 -- # local accel_module 00:18:33.708 21:31:54 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:18:33.708 21:31:54 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:18:33.708 21:31:54 -- accel/accel.sh@12 -- # build_accel_config 00:18:33.708 21:31:54 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:18:33.708 21:31:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:18:33.708 21:31:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:18:33.708 21:31:54 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:18:33.708 21:31:54 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:18:33.708 21:31:54 -- accel/accel.sh@41 -- # local IFS=, 00:18:33.708 21:31:54 -- accel/accel.sh@42 -- # jq -r . 00:18:33.708 [2024-07-11 21:31:54.410328] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:18:33.708 [2024-07-11 21:31:54.410433] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68998 ] 00:18:33.708 [2024-07-11 21:31:54.547829] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:33.967 [2024-07-11 21:31:54.658339] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:35.344 21:31:55 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:18:35.344 00:18:35.344 SPDK Configuration: 00:18:35.344 Core mask: 0x1 00:18:35.344 00:18:35.344 Accel Perf Configuration: 00:18:35.344 Workload Type: decompress 00:18:35.344 Transfer size: 4096 bytes 00:18:35.344 Vector count 1 00:18:35.344 Module: software 00:18:35.344 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:18:35.344 Queue depth: 32 00:18:35.344 Allocate depth: 32 00:18:35.344 # threads/core: 1 00:18:35.344 Run time: 1 seconds 00:18:35.344 Verify: Yes 00:18:35.344 00:18:35.344 Running for 1 seconds... 00:18:35.344 00:18:35.344 Core,Thread Transfers Bandwidth Failed Miscompares 00:18:35.344 ------------------------------------------------------------------------------------ 00:18:35.344 0,0 63584/s 117 MiB/s 0 0 00:18:35.344 ==================================================================================== 00:18:35.344 Total 63584/s 248 MiB/s 0 0' 00:18:35.344 21:31:55 -- accel/accel.sh@20 -- # IFS=: 00:18:35.344 21:31:55 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:18:35.344 21:31:55 -- accel/accel.sh@20 -- # read -r var val 00:18:35.344 21:31:55 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:18:35.344 21:31:55 -- accel/accel.sh@12 -- # build_accel_config 00:18:35.344 21:31:55 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:18:35.344 21:31:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:18:35.344 21:31:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:18:35.344 21:31:55 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:18:35.344 21:31:55 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:18:35.344 21:31:55 -- accel/accel.sh@41 -- # local IFS=, 00:18:35.344 21:31:55 -- accel/accel.sh@42 -- # jq -r . 00:18:35.344 [2024-07-11 21:31:55.909667] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
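For the decompress run above, 63584 transfers/s * 4096 bytes ~ 260.4 MB/s ~ 248.4 MiB/s, which agrees with the 248 MiB/s Total row; the 117 MiB/s printed in the per-core row is not consistent with that transfer count as captured here, so the Total figure is the one that checks out.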
00:18:35.344 [2024-07-11 21:31:55.909778] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69023 ] 00:18:35.344 [2024-07-11 21:31:56.048094] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:35.344 [2024-07-11 21:31:56.161762] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:35.344 21:31:56 -- accel/accel.sh@21 -- # val= 00:18:35.344 21:31:56 -- accel/accel.sh@22 -- # case "$var" in 00:18:35.344 21:31:56 -- accel/accel.sh@20 -- # IFS=: 00:18:35.344 21:31:56 -- accel/accel.sh@20 -- # read -r var val 00:18:35.344 21:31:56 -- accel/accel.sh@21 -- # val= 00:18:35.344 21:31:56 -- accel/accel.sh@22 -- # case "$var" in 00:18:35.344 21:31:56 -- accel/accel.sh@20 -- # IFS=: 00:18:35.344 21:31:56 -- accel/accel.sh@20 -- # read -r var val 00:18:35.344 21:31:56 -- accel/accel.sh@21 -- # val= 00:18:35.344 21:31:56 -- accel/accel.sh@22 -- # case "$var" in 00:18:35.344 21:31:56 -- accel/accel.sh@20 -- # IFS=: 00:18:35.344 21:31:56 -- accel/accel.sh@20 -- # read -r var val 00:18:35.345 21:31:56 -- accel/accel.sh@21 -- # val=0x1 00:18:35.345 21:31:56 -- accel/accel.sh@22 -- # case "$var" in 00:18:35.345 21:31:56 -- accel/accel.sh@20 -- # IFS=: 00:18:35.345 21:31:56 -- accel/accel.sh@20 -- # read -r var val 00:18:35.345 21:31:56 -- accel/accel.sh@21 -- # val= 00:18:35.345 21:31:56 -- accel/accel.sh@22 -- # case "$var" in 00:18:35.345 21:31:56 -- accel/accel.sh@20 -- # IFS=: 00:18:35.345 21:31:56 -- accel/accel.sh@20 -- # read -r var val 00:18:35.345 21:31:56 -- accel/accel.sh@21 -- # val= 00:18:35.345 21:31:56 -- accel/accel.sh@22 -- # case "$var" in 00:18:35.345 21:31:56 -- accel/accel.sh@20 -- # IFS=: 00:18:35.345 21:31:56 -- accel/accel.sh@20 -- # read -r var val 00:18:35.345 21:31:56 -- accel/accel.sh@21 -- # val=decompress 00:18:35.345 21:31:56 -- accel/accel.sh@22 -- # case "$var" in 00:18:35.345 21:31:56 -- accel/accel.sh@24 -- # accel_opc=decompress 00:18:35.345 21:31:56 -- accel/accel.sh@20 -- # IFS=: 00:18:35.345 21:31:56 -- accel/accel.sh@20 -- # read -r var val 00:18:35.345 21:31:56 -- accel/accel.sh@21 -- # val='4096 bytes' 00:18:35.345 21:31:56 -- accel/accel.sh@22 -- # case "$var" in 00:18:35.345 21:31:56 -- accel/accel.sh@20 -- # IFS=: 00:18:35.345 21:31:56 -- accel/accel.sh@20 -- # read -r var val 00:18:35.345 21:31:56 -- accel/accel.sh@21 -- # val= 00:18:35.345 21:31:56 -- accel/accel.sh@22 -- # case "$var" in 00:18:35.345 21:31:56 -- accel/accel.sh@20 -- # IFS=: 00:18:35.345 21:31:56 -- accel/accel.sh@20 -- # read -r var val 00:18:35.345 21:31:56 -- accel/accel.sh@21 -- # val=software 00:18:35.345 21:31:56 -- accel/accel.sh@22 -- # case "$var" in 00:18:35.345 21:31:56 -- accel/accel.sh@23 -- # accel_module=software 00:18:35.345 21:31:56 -- accel/accel.sh@20 -- # IFS=: 00:18:35.345 21:31:56 -- accel/accel.sh@20 -- # read -r var val 00:18:35.345 21:31:56 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:18:35.345 21:31:56 -- accel/accel.sh@22 -- # case "$var" in 00:18:35.345 21:31:56 -- accel/accel.sh@20 -- # IFS=: 00:18:35.345 21:31:56 -- accel/accel.sh@20 -- # read -r var val 00:18:35.345 21:31:56 -- accel/accel.sh@21 -- # val=32 00:18:35.345 21:31:56 -- accel/accel.sh@22 -- # case "$var" in 00:18:35.345 21:31:56 -- accel/accel.sh@20 -- # IFS=: 00:18:35.345 21:31:56 -- accel/accel.sh@20 -- # read -r var val 00:18:35.345 21:31:56 -- 
accel/accel.sh@21 -- # val=32 00:18:35.345 21:31:56 -- accel/accel.sh@22 -- # case "$var" in 00:18:35.345 21:31:56 -- accel/accel.sh@20 -- # IFS=: 00:18:35.345 21:31:56 -- accel/accel.sh@20 -- # read -r var val 00:18:35.345 21:31:56 -- accel/accel.sh@21 -- # val=1 00:18:35.345 21:31:56 -- accel/accel.sh@22 -- # case "$var" in 00:18:35.345 21:31:56 -- accel/accel.sh@20 -- # IFS=: 00:18:35.345 21:31:56 -- accel/accel.sh@20 -- # read -r var val 00:18:35.345 21:31:56 -- accel/accel.sh@21 -- # val='1 seconds' 00:18:35.345 21:31:56 -- accel/accel.sh@22 -- # case "$var" in 00:18:35.345 21:31:56 -- accel/accel.sh@20 -- # IFS=: 00:18:35.345 21:31:56 -- accel/accel.sh@20 -- # read -r var val 00:18:35.345 21:31:56 -- accel/accel.sh@21 -- # val=Yes 00:18:35.345 21:31:56 -- accel/accel.sh@22 -- # case "$var" in 00:18:35.345 21:31:56 -- accel/accel.sh@20 -- # IFS=: 00:18:35.345 21:31:56 -- accel/accel.sh@20 -- # read -r var val 00:18:35.345 21:31:56 -- accel/accel.sh@21 -- # val= 00:18:35.345 21:31:56 -- accel/accel.sh@22 -- # case "$var" in 00:18:35.345 21:31:56 -- accel/accel.sh@20 -- # IFS=: 00:18:35.345 21:31:56 -- accel/accel.sh@20 -- # read -r var val 00:18:35.345 21:31:56 -- accel/accel.sh@21 -- # val= 00:18:35.345 21:31:56 -- accel/accel.sh@22 -- # case "$var" in 00:18:35.345 21:31:56 -- accel/accel.sh@20 -- # IFS=: 00:18:35.345 21:31:56 -- accel/accel.sh@20 -- # read -r var val 00:18:36.719 21:31:57 -- accel/accel.sh@21 -- # val= 00:18:36.719 21:31:57 -- accel/accel.sh@22 -- # case "$var" in 00:18:36.719 21:31:57 -- accel/accel.sh@20 -- # IFS=: 00:18:36.719 21:31:57 -- accel/accel.sh@20 -- # read -r var val 00:18:36.719 21:31:57 -- accel/accel.sh@21 -- # val= 00:18:36.719 21:31:57 -- accel/accel.sh@22 -- # case "$var" in 00:18:36.719 21:31:57 -- accel/accel.sh@20 -- # IFS=: 00:18:36.719 21:31:57 -- accel/accel.sh@20 -- # read -r var val 00:18:36.719 21:31:57 -- accel/accel.sh@21 -- # val= 00:18:36.719 21:31:57 -- accel/accel.sh@22 -- # case "$var" in 00:18:36.719 21:31:57 -- accel/accel.sh@20 -- # IFS=: 00:18:36.719 21:31:57 -- accel/accel.sh@20 -- # read -r var val 00:18:36.719 21:31:57 -- accel/accel.sh@21 -- # val= 00:18:36.719 21:31:57 -- accel/accel.sh@22 -- # case "$var" in 00:18:36.719 21:31:57 -- accel/accel.sh@20 -- # IFS=: 00:18:36.719 21:31:57 -- accel/accel.sh@20 -- # read -r var val 00:18:36.719 21:31:57 -- accel/accel.sh@21 -- # val= 00:18:36.719 21:31:57 -- accel/accel.sh@22 -- # case "$var" in 00:18:36.719 21:31:57 -- accel/accel.sh@20 -- # IFS=: 00:18:36.719 21:31:57 -- accel/accel.sh@20 -- # read -r var val 00:18:36.719 21:31:57 -- accel/accel.sh@21 -- # val= 00:18:36.719 21:31:57 -- accel/accel.sh@22 -- # case "$var" in 00:18:36.719 21:31:57 -- accel/accel.sh@20 -- # IFS=: 00:18:36.719 21:31:57 -- accel/accel.sh@20 -- # read -r var val 00:18:36.719 21:31:57 -- accel/accel.sh@28 -- # [[ -n software ]] 00:18:36.719 21:31:57 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:18:36.719 21:31:57 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:36.719 00:18:36.719 real 0m3.016s 00:18:36.719 user 0m2.564s 00:18:36.719 sys 0m0.240s 00:18:36.719 ************************************ 00:18:36.719 END TEST accel_decomp 00:18:36.719 ************************************ 00:18:36.719 21:31:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:36.719 21:31:57 -- common/autotest_common.sh@10 -- # set +x 00:18:36.719 21:31:57 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 
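Every accel_decomp* case in this block exercises the same standalone benchmark: the xtrace lines show accel_test forwarding its arguments to build/examples/accel_perf, while build_accel_config appears to stream a JSON accel configuration in over /dev/fd/62 (no modules are enabled here, judging by the 0 -gt 0 checks, which is why every run reports 'Module: software'). A roughly equivalent manual run of the case that just finished, using only the flags and paths visible in the trace and simply dropping the -c /dev/fd/62 config argument, would be:

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
      -t 1 -w decompress \
      -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y

The next case, accel_decmop_full, reuses this command line and only appends -o 0, as shown in the run_test line directly above.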
00:18:36.719 21:31:57 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:18:36.719 21:31:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:36.719 21:31:57 -- common/autotest_common.sh@10 -- # set +x 00:18:36.719 ************************************ 00:18:36.719 START TEST accel_decmop_full 00:18:36.719 ************************************ 00:18:36.719 21:31:57 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:18:36.719 21:31:57 -- accel/accel.sh@16 -- # local accel_opc 00:18:36.719 21:31:57 -- accel/accel.sh@17 -- # local accel_module 00:18:36.719 21:31:57 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:18:36.719 21:31:57 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:18:36.719 21:31:57 -- accel/accel.sh@12 -- # build_accel_config 00:18:36.719 21:31:57 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:18:36.719 21:31:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:18:36.719 21:31:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:18:36.719 21:31:57 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:18:36.719 21:31:57 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:18:36.719 21:31:57 -- accel/accel.sh@41 -- # local IFS=, 00:18:36.719 21:31:57 -- accel/accel.sh@42 -- # jq -r . 00:18:36.719 [2024-07-11 21:31:57.482650] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:18:36.719 [2024-07-11 21:31:57.483183] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69052 ] 00:18:36.719 [2024-07-11 21:31:57.633100] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:36.977 [2024-07-11 21:31:57.746879] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:38.352 21:31:58 -- accel/accel.sh@18 -- # out='Preparing input file... 00:18:38.352 00:18:38.352 SPDK Configuration: 00:18:38.352 Core mask: 0x1 00:18:38.352 00:18:38.352 Accel Perf Configuration: 00:18:38.352 Workload Type: decompress 00:18:38.352 Transfer size: 111250 bytes 00:18:38.352 Vector count 1 00:18:38.352 Module: software 00:18:38.352 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:18:38.352 Queue depth: 32 00:18:38.352 Allocate depth: 32 00:18:38.352 # threads/core: 1 00:18:38.352 Run time: 1 seconds 00:18:38.352 Verify: Yes 00:18:38.352 00:18:38.352 Running for 1 seconds... 
00:18:38.352 00:18:38.352 Core,Thread Transfers Bandwidth Failed Miscompares 00:18:38.352 ------------------------------------------------------------------------------------ 00:18:38.352 0,0 4320/s 178 MiB/s 0 0 00:18:38.352 ==================================================================================== 00:18:38.352 Total 4320/s 458 MiB/s 0 0' 00:18:38.352 21:31:58 -- accel/accel.sh@20 -- # IFS=: 00:18:38.352 21:31:58 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:18:38.352 21:31:58 -- accel/accel.sh@20 -- # read -r var val 00:18:38.352 21:31:58 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:18:38.352 21:31:58 -- accel/accel.sh@12 -- # build_accel_config 00:18:38.352 21:31:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:18:38.352 21:31:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:18:38.352 21:31:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:18:38.352 21:31:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:18:38.352 21:31:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:18:38.352 21:31:59 -- accel/accel.sh@41 -- # local IFS=, 00:18:38.352 21:31:59 -- accel/accel.sh@42 -- # jq -r . 00:18:38.352 [2024-07-11 21:31:59.016547] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:18:38.352 [2024-07-11 21:31:59.016656] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69077 ] 00:18:38.352 [2024-07-11 21:31:59.151901] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:38.352 [2024-07-11 21:31:59.264505] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:38.610 21:31:59 -- accel/accel.sh@21 -- # val= 00:18:38.610 21:31:59 -- accel/accel.sh@22 -- # case "$var" in 00:18:38.610 21:31:59 -- accel/accel.sh@20 -- # IFS=: 00:18:38.610 21:31:59 -- accel/accel.sh@20 -- # read -r var val 00:18:38.610 21:31:59 -- accel/accel.sh@21 -- # val= 00:18:38.610 21:31:59 -- accel/accel.sh@22 -- # case "$var" in 00:18:38.610 21:31:59 -- accel/accel.sh@20 -- # IFS=: 00:18:38.610 21:31:59 -- accel/accel.sh@20 -- # read -r var val 00:18:38.610 21:31:59 -- accel/accel.sh@21 -- # val= 00:18:38.610 21:31:59 -- accel/accel.sh@22 -- # case "$var" in 00:18:38.610 21:31:59 -- accel/accel.sh@20 -- # IFS=: 00:18:38.610 21:31:59 -- accel/accel.sh@20 -- # read -r var val 00:18:38.610 21:31:59 -- accel/accel.sh@21 -- # val=0x1 00:18:38.610 21:31:59 -- accel/accel.sh@22 -- # case "$var" in 00:18:38.610 21:31:59 -- accel/accel.sh@20 -- # IFS=: 00:18:38.610 21:31:59 -- accel/accel.sh@20 -- # read -r var val 00:18:38.610 21:31:59 -- accel/accel.sh@21 -- # val= 00:18:38.610 21:31:59 -- accel/accel.sh@22 -- # case "$var" in 00:18:38.610 21:31:59 -- accel/accel.sh@20 -- # IFS=: 00:18:38.611 21:31:59 -- accel/accel.sh@20 -- # read -r var val 00:18:38.611 21:31:59 -- accel/accel.sh@21 -- # val= 00:18:38.611 21:31:59 -- accel/accel.sh@22 -- # case "$var" in 00:18:38.611 21:31:59 -- accel/accel.sh@20 -- # IFS=: 00:18:38.611 21:31:59 -- accel/accel.sh@20 -- # read -r var val 00:18:38.611 21:31:59 -- accel/accel.sh@21 -- # val=decompress 00:18:38.611 21:31:59 -- accel/accel.sh@22 -- # case "$var" in 00:18:38.611 21:31:59 -- accel/accel.sh@24 -- # accel_opc=decompress 00:18:38.611 21:31:59 -- accel/accel.sh@20 
-- # IFS=: 00:18:38.611 21:31:59 -- accel/accel.sh@20 -- # read -r var val 00:18:38.611 21:31:59 -- accel/accel.sh@21 -- # val='111250 bytes' 00:18:38.611 21:31:59 -- accel/accel.sh@22 -- # case "$var" in 00:18:38.611 21:31:59 -- accel/accel.sh@20 -- # IFS=: 00:18:38.611 21:31:59 -- accel/accel.sh@20 -- # read -r var val 00:18:38.611 21:31:59 -- accel/accel.sh@21 -- # val= 00:18:38.611 21:31:59 -- accel/accel.sh@22 -- # case "$var" in 00:18:38.611 21:31:59 -- accel/accel.sh@20 -- # IFS=: 00:18:38.611 21:31:59 -- accel/accel.sh@20 -- # read -r var val 00:18:38.611 21:31:59 -- accel/accel.sh@21 -- # val=software 00:18:38.611 21:31:59 -- accel/accel.sh@22 -- # case "$var" in 00:18:38.611 21:31:59 -- accel/accel.sh@23 -- # accel_module=software 00:18:38.611 21:31:59 -- accel/accel.sh@20 -- # IFS=: 00:18:38.611 21:31:59 -- accel/accel.sh@20 -- # read -r var val 00:18:38.611 21:31:59 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:18:38.611 21:31:59 -- accel/accel.sh@22 -- # case "$var" in 00:18:38.611 21:31:59 -- accel/accel.sh@20 -- # IFS=: 00:18:38.611 21:31:59 -- accel/accel.sh@20 -- # read -r var val 00:18:38.611 21:31:59 -- accel/accel.sh@21 -- # val=32 00:18:38.611 21:31:59 -- accel/accel.sh@22 -- # case "$var" in 00:18:38.611 21:31:59 -- accel/accel.sh@20 -- # IFS=: 00:18:38.611 21:31:59 -- accel/accel.sh@20 -- # read -r var val 00:18:38.611 21:31:59 -- accel/accel.sh@21 -- # val=32 00:18:38.611 21:31:59 -- accel/accel.sh@22 -- # case "$var" in 00:18:38.611 21:31:59 -- accel/accel.sh@20 -- # IFS=: 00:18:38.611 21:31:59 -- accel/accel.sh@20 -- # read -r var val 00:18:38.611 21:31:59 -- accel/accel.sh@21 -- # val=1 00:18:38.611 21:31:59 -- accel/accel.sh@22 -- # case "$var" in 00:18:38.611 21:31:59 -- accel/accel.sh@20 -- # IFS=: 00:18:38.611 21:31:59 -- accel/accel.sh@20 -- # read -r var val 00:18:38.611 21:31:59 -- accel/accel.sh@21 -- # val='1 seconds' 00:18:38.611 21:31:59 -- accel/accel.sh@22 -- # case "$var" in 00:18:38.611 21:31:59 -- accel/accel.sh@20 -- # IFS=: 00:18:38.611 21:31:59 -- accel/accel.sh@20 -- # read -r var val 00:18:38.611 21:31:59 -- accel/accel.sh@21 -- # val=Yes 00:18:38.611 21:31:59 -- accel/accel.sh@22 -- # case "$var" in 00:18:38.611 21:31:59 -- accel/accel.sh@20 -- # IFS=: 00:18:38.611 21:31:59 -- accel/accel.sh@20 -- # read -r var val 00:18:38.611 21:31:59 -- accel/accel.sh@21 -- # val= 00:18:38.611 21:31:59 -- accel/accel.sh@22 -- # case "$var" in 00:18:38.611 21:31:59 -- accel/accel.sh@20 -- # IFS=: 00:18:38.611 21:31:59 -- accel/accel.sh@20 -- # read -r var val 00:18:38.611 21:31:59 -- accel/accel.sh@21 -- # val= 00:18:38.611 21:31:59 -- accel/accel.sh@22 -- # case "$var" in 00:18:38.611 21:31:59 -- accel/accel.sh@20 -- # IFS=: 00:18:38.611 21:31:59 -- accel/accel.sh@20 -- # read -r var val 00:18:39.985 21:32:00 -- accel/accel.sh@21 -- # val= 00:18:39.985 21:32:00 -- accel/accel.sh@22 -- # case "$var" in 00:18:39.985 21:32:00 -- accel/accel.sh@20 -- # IFS=: 00:18:39.985 21:32:00 -- accel/accel.sh@20 -- # read -r var val 00:18:39.985 21:32:00 -- accel/accel.sh@21 -- # val= 00:18:39.985 21:32:00 -- accel/accel.sh@22 -- # case "$var" in 00:18:39.985 21:32:00 -- accel/accel.sh@20 -- # IFS=: 00:18:39.985 21:32:00 -- accel/accel.sh@20 -- # read -r var val 00:18:39.985 21:32:00 -- accel/accel.sh@21 -- # val= 00:18:39.985 21:32:00 -- accel/accel.sh@22 -- # case "$var" in 00:18:39.985 21:32:00 -- accel/accel.sh@20 -- # IFS=: 00:18:39.985 21:32:00 -- accel/accel.sh@20 -- # read -r var val 00:18:39.985 21:32:00 -- accel/accel.sh@21 -- # 
val= 00:18:39.985 21:32:00 -- accel/accel.sh@22 -- # case "$var" in 00:18:39.985 21:32:00 -- accel/accel.sh@20 -- # IFS=: 00:18:39.985 21:32:00 -- accel/accel.sh@20 -- # read -r var val 00:18:39.985 21:32:00 -- accel/accel.sh@21 -- # val= 00:18:39.985 21:32:00 -- accel/accel.sh@22 -- # case "$var" in 00:18:39.985 21:32:00 -- accel/accel.sh@20 -- # IFS=: 00:18:39.985 21:32:00 -- accel/accel.sh@20 -- # read -r var val 00:18:39.985 21:32:00 -- accel/accel.sh@21 -- # val= 00:18:39.985 21:32:00 -- accel/accel.sh@22 -- # case "$var" in 00:18:39.985 21:32:00 -- accel/accel.sh@20 -- # IFS=: 00:18:39.985 21:32:00 -- accel/accel.sh@20 -- # read -r var val 00:18:39.985 21:32:00 -- accel/accel.sh@28 -- # [[ -n software ]] 00:18:39.985 21:32:00 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:18:39.985 21:32:00 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:39.985 00:18:39.985 real 0m3.060s 00:18:39.985 user 0m2.593s 00:18:39.985 sys 0m0.256s 00:18:39.985 21:32:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:39.985 21:32:00 -- common/autotest_common.sh@10 -- # set +x 00:18:39.985 ************************************ 00:18:39.985 END TEST accel_decmop_full 00:18:39.985 ************************************ 00:18:39.985 21:32:00 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:18:39.985 21:32:00 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:18:39.985 21:32:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:39.985 21:32:00 -- common/autotest_common.sh@10 -- # set +x 00:18:39.985 ************************************ 00:18:39.985 START TEST accel_decomp_mcore 00:18:39.985 ************************************ 00:18:39.985 21:32:00 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:18:39.985 21:32:00 -- accel/accel.sh@16 -- # local accel_opc 00:18:39.985 21:32:00 -- accel/accel.sh@17 -- # local accel_module 00:18:39.985 21:32:00 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:18:39.985 21:32:00 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:18:39.985 21:32:00 -- accel/accel.sh@12 -- # build_accel_config 00:18:39.985 21:32:00 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:18:39.985 21:32:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:18:39.985 21:32:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:18:39.985 21:32:00 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:18:39.985 21:32:00 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:18:39.985 21:32:00 -- accel/accel.sh@41 -- # local IFS=, 00:18:39.985 21:32:00 -- accel/accel.sh@42 -- # jq -r . 00:18:39.985 [2024-07-11 21:32:00.583850] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
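The accel_decmop_full case that just finished differs from plain accel_decomp only in the trailing -o 0. Judging by the configuration output, that makes accel_perf take its transfer size from the test input itself (111250 bytes here) instead of the 4096-byte default, so the operation rate drops while throughput rises:

  4320 transfers/s x 111250 bytes ~= 480.6 MB/s ~= 458 MiB/s (the reported Total), up from 248 MiB/s for the 4096-byte run.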
00:18:39.985 [2024-07-11 21:32:00.583969] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69106 ] 00:18:39.985 [2024-07-11 21:32:00.724644] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:39.985 [2024-07-11 21:32:00.829429] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:39.985 [2024-07-11 21:32:00.829520] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:39.985 [2024-07-11 21:32:00.829612] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:39.985 [2024-07-11 21:32:00.829612] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:41.357 21:32:02 -- accel/accel.sh@18 -- # out='Preparing input file... 00:18:41.357 00:18:41.357 SPDK Configuration: 00:18:41.357 Core mask: 0xf 00:18:41.357 00:18:41.357 Accel Perf Configuration: 00:18:41.357 Workload Type: decompress 00:18:41.357 Transfer size: 4096 bytes 00:18:41.357 Vector count 1 00:18:41.357 Module: software 00:18:41.357 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:18:41.357 Queue depth: 32 00:18:41.357 Allocate depth: 32 00:18:41.357 # threads/core: 1 00:18:41.357 Run time: 1 seconds 00:18:41.357 Verify: Yes 00:18:41.357 00:18:41.357 Running for 1 seconds... 00:18:41.357 00:18:41.357 Core,Thread Transfers Bandwidth Failed Miscompares 00:18:41.357 ------------------------------------------------------------------------------------ 00:18:41.357 0,0 58080/s 107 MiB/s 0 0 00:18:41.357 3,0 57984/s 106 MiB/s 0 0 00:18:41.357 2,0 57952/s 106 MiB/s 0 0 00:18:41.357 1,0 58560/s 107 MiB/s 0 0 00:18:41.357 ==================================================================================== 00:18:41.357 Total 232576/s 908 MiB/s 0 0' 00:18:41.357 21:32:02 -- accel/accel.sh@20 -- # IFS=: 00:18:41.357 21:32:02 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:18:41.357 21:32:02 -- accel/accel.sh@20 -- # read -r var val 00:18:41.357 21:32:02 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:18:41.357 21:32:02 -- accel/accel.sh@12 -- # build_accel_config 00:18:41.357 21:32:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:18:41.357 21:32:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:18:41.357 21:32:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:18:41.357 21:32:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:18:41.357 21:32:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:18:41.357 21:32:02 -- accel/accel.sh@41 -- # local IFS=, 00:18:41.357 21:32:02 -- accel/accel.sh@42 -- # jq -r . 00:18:41.357 [2024-07-11 21:32:02.090417] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
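accel_decomp_mcore keeps the 4096-byte transfers but adds -m 0xf, so the app starts with core mask 0xf and four reactors (cores 0-3) each run the workload; the results table above gains one row per core,thread pair. The standalone form (paths as in the example above) simply appends the core mask:

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
      -t 1 -w decompress \
      -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf

Aggregate throughput scales close to linearly with the extra cores:

  232576 transfers/s x 4096 bytes ~= 952.6 MB/s ~= 908 MiB/s, roughly 3.7x the 63584/s of the single-core run.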
00:18:41.357 [2024-07-11 21:32:02.090913] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69133 ] 00:18:41.357 [2024-07-11 21:32:02.234601] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:41.614 [2024-07-11 21:32:02.340911] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:41.614 [2024-07-11 21:32:02.341020] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:41.614 [2024-07-11 21:32:02.341145] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:41.614 [2024-07-11 21:32:02.341148] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:41.614 21:32:02 -- accel/accel.sh@21 -- # val= 00:18:41.614 21:32:02 -- accel/accel.sh@22 -- # case "$var" in 00:18:41.614 21:32:02 -- accel/accel.sh@20 -- # IFS=: 00:18:41.614 21:32:02 -- accel/accel.sh@20 -- # read -r var val 00:18:41.614 21:32:02 -- accel/accel.sh@21 -- # val= 00:18:41.614 21:32:02 -- accel/accel.sh@22 -- # case "$var" in 00:18:41.614 21:32:02 -- accel/accel.sh@20 -- # IFS=: 00:18:41.614 21:32:02 -- accel/accel.sh@20 -- # read -r var val 00:18:41.614 21:32:02 -- accel/accel.sh@21 -- # val= 00:18:41.614 21:32:02 -- accel/accel.sh@22 -- # case "$var" in 00:18:41.614 21:32:02 -- accel/accel.sh@20 -- # IFS=: 00:18:41.614 21:32:02 -- accel/accel.sh@20 -- # read -r var val 00:18:41.614 21:32:02 -- accel/accel.sh@21 -- # val=0xf 00:18:41.614 21:32:02 -- accel/accel.sh@22 -- # case "$var" in 00:18:41.614 21:32:02 -- accel/accel.sh@20 -- # IFS=: 00:18:41.614 21:32:02 -- accel/accel.sh@20 -- # read -r var val 00:18:41.614 21:32:02 -- accel/accel.sh@21 -- # val= 00:18:41.614 21:32:02 -- accel/accel.sh@22 -- # case "$var" in 00:18:41.614 21:32:02 -- accel/accel.sh@20 -- # IFS=: 00:18:41.614 21:32:02 -- accel/accel.sh@20 -- # read -r var val 00:18:41.614 21:32:02 -- accel/accel.sh@21 -- # val= 00:18:41.614 21:32:02 -- accel/accel.sh@22 -- # case "$var" in 00:18:41.614 21:32:02 -- accel/accel.sh@20 -- # IFS=: 00:18:41.614 21:32:02 -- accel/accel.sh@20 -- # read -r var val 00:18:41.614 21:32:02 -- accel/accel.sh@21 -- # val=decompress 00:18:41.614 21:32:02 -- accel/accel.sh@22 -- # case "$var" in 00:18:41.614 21:32:02 -- accel/accel.sh@24 -- # accel_opc=decompress 00:18:41.614 21:32:02 -- accel/accel.sh@20 -- # IFS=: 00:18:41.614 21:32:02 -- accel/accel.sh@20 -- # read -r var val 00:18:41.614 21:32:02 -- accel/accel.sh@21 -- # val='4096 bytes' 00:18:41.614 21:32:02 -- accel/accel.sh@22 -- # case "$var" in 00:18:41.614 21:32:02 -- accel/accel.sh@20 -- # IFS=: 00:18:41.614 21:32:02 -- accel/accel.sh@20 -- # read -r var val 00:18:41.614 21:32:02 -- accel/accel.sh@21 -- # val= 00:18:41.614 21:32:02 -- accel/accel.sh@22 -- # case "$var" in 00:18:41.614 21:32:02 -- accel/accel.sh@20 -- # IFS=: 00:18:41.614 21:32:02 -- accel/accel.sh@20 -- # read -r var val 00:18:41.614 21:32:02 -- accel/accel.sh@21 -- # val=software 00:18:41.614 21:32:02 -- accel/accel.sh@22 -- # case "$var" in 00:18:41.614 21:32:02 -- accel/accel.sh@23 -- # accel_module=software 00:18:41.614 21:32:02 -- accel/accel.sh@20 -- # IFS=: 00:18:41.614 21:32:02 -- accel/accel.sh@20 -- # read -r var val 00:18:41.614 21:32:02 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:18:41.614 21:32:02 -- accel/accel.sh@22 -- # case "$var" in 00:18:41.614 21:32:02 -- accel/accel.sh@20 -- # IFS=: 
00:18:41.614 21:32:02 -- accel/accel.sh@20 -- # read -r var val 00:18:41.614 21:32:02 -- accel/accel.sh@21 -- # val=32 00:18:41.614 21:32:02 -- accel/accel.sh@22 -- # case "$var" in 00:18:41.614 21:32:02 -- accel/accel.sh@20 -- # IFS=: 00:18:41.614 21:32:02 -- accel/accel.sh@20 -- # read -r var val 00:18:41.614 21:32:02 -- accel/accel.sh@21 -- # val=32 00:18:41.614 21:32:02 -- accel/accel.sh@22 -- # case "$var" in 00:18:41.614 21:32:02 -- accel/accel.sh@20 -- # IFS=: 00:18:41.614 21:32:02 -- accel/accel.sh@20 -- # read -r var val 00:18:41.614 21:32:02 -- accel/accel.sh@21 -- # val=1 00:18:41.614 21:32:02 -- accel/accel.sh@22 -- # case "$var" in 00:18:41.614 21:32:02 -- accel/accel.sh@20 -- # IFS=: 00:18:41.614 21:32:02 -- accel/accel.sh@20 -- # read -r var val 00:18:41.614 21:32:02 -- accel/accel.sh@21 -- # val='1 seconds' 00:18:41.614 21:32:02 -- accel/accel.sh@22 -- # case "$var" in 00:18:41.614 21:32:02 -- accel/accel.sh@20 -- # IFS=: 00:18:41.614 21:32:02 -- accel/accel.sh@20 -- # read -r var val 00:18:41.614 21:32:02 -- accel/accel.sh@21 -- # val=Yes 00:18:41.614 21:32:02 -- accel/accel.sh@22 -- # case "$var" in 00:18:41.614 21:32:02 -- accel/accel.sh@20 -- # IFS=: 00:18:41.614 21:32:02 -- accel/accel.sh@20 -- # read -r var val 00:18:41.614 21:32:02 -- accel/accel.sh@21 -- # val= 00:18:41.614 21:32:02 -- accel/accel.sh@22 -- # case "$var" in 00:18:41.614 21:32:02 -- accel/accel.sh@20 -- # IFS=: 00:18:41.614 21:32:02 -- accel/accel.sh@20 -- # read -r var val 00:18:41.614 21:32:02 -- accel/accel.sh@21 -- # val= 00:18:41.614 21:32:02 -- accel/accel.sh@22 -- # case "$var" in 00:18:41.614 21:32:02 -- accel/accel.sh@20 -- # IFS=: 00:18:41.615 21:32:02 -- accel/accel.sh@20 -- # read -r var val 00:18:43.070 21:32:03 -- accel/accel.sh@21 -- # val= 00:18:43.070 21:32:03 -- accel/accel.sh@22 -- # case "$var" in 00:18:43.070 21:32:03 -- accel/accel.sh@20 -- # IFS=: 00:18:43.070 21:32:03 -- accel/accel.sh@20 -- # read -r var val 00:18:43.070 21:32:03 -- accel/accel.sh@21 -- # val= 00:18:43.070 21:32:03 -- accel/accel.sh@22 -- # case "$var" in 00:18:43.070 21:32:03 -- accel/accel.sh@20 -- # IFS=: 00:18:43.071 21:32:03 -- accel/accel.sh@20 -- # read -r var val 00:18:43.071 21:32:03 -- accel/accel.sh@21 -- # val= 00:18:43.071 21:32:03 -- accel/accel.sh@22 -- # case "$var" in 00:18:43.071 21:32:03 -- accel/accel.sh@20 -- # IFS=: 00:18:43.071 21:32:03 -- accel/accel.sh@20 -- # read -r var val 00:18:43.071 21:32:03 -- accel/accel.sh@21 -- # val= 00:18:43.071 21:32:03 -- accel/accel.sh@22 -- # case "$var" in 00:18:43.071 21:32:03 -- accel/accel.sh@20 -- # IFS=: 00:18:43.071 21:32:03 -- accel/accel.sh@20 -- # read -r var val 00:18:43.071 21:32:03 -- accel/accel.sh@21 -- # val= 00:18:43.071 21:32:03 -- accel/accel.sh@22 -- # case "$var" in 00:18:43.071 21:32:03 -- accel/accel.sh@20 -- # IFS=: 00:18:43.071 21:32:03 -- accel/accel.sh@20 -- # read -r var val 00:18:43.071 21:32:03 -- accel/accel.sh@21 -- # val= 00:18:43.071 21:32:03 -- accel/accel.sh@22 -- # case "$var" in 00:18:43.071 21:32:03 -- accel/accel.sh@20 -- # IFS=: 00:18:43.071 21:32:03 -- accel/accel.sh@20 -- # read -r var val 00:18:43.071 21:32:03 -- accel/accel.sh@21 -- # val= 00:18:43.071 21:32:03 -- accel/accel.sh@22 -- # case "$var" in 00:18:43.071 21:32:03 -- accel/accel.sh@20 -- # IFS=: 00:18:43.071 21:32:03 -- accel/accel.sh@20 -- # read -r var val 00:18:43.071 21:32:03 -- accel/accel.sh@21 -- # val= 00:18:43.071 21:32:03 -- accel/accel.sh@22 -- # case "$var" in 00:18:43.071 21:32:03 -- accel/accel.sh@20 -- # IFS=: 00:18:43.071 21:32:03 -- 
accel/accel.sh@20 -- # read -r var val 00:18:43.071 21:32:03 -- accel/accel.sh@21 -- # val= 00:18:43.071 21:32:03 -- accel/accel.sh@22 -- # case "$var" in 00:18:43.071 21:32:03 -- accel/accel.sh@20 -- # IFS=: 00:18:43.071 21:32:03 -- accel/accel.sh@20 -- # read -r var val 00:18:43.071 21:32:03 -- accel/accel.sh@28 -- # [[ -n software ]] 00:18:43.071 21:32:03 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:18:43.071 21:32:03 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:43.071 00:18:43.071 real 0m3.013s 00:18:43.071 user 0m9.387s 00:18:43.071 sys 0m0.265s 00:18:43.071 21:32:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:43.071 21:32:03 -- common/autotest_common.sh@10 -- # set +x 00:18:43.071 ************************************ 00:18:43.071 END TEST accel_decomp_mcore 00:18:43.071 ************************************ 00:18:43.071 21:32:03 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:18:43.071 21:32:03 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:18:43.071 21:32:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:43.071 21:32:03 -- common/autotest_common.sh@10 -- # set +x 00:18:43.071 ************************************ 00:18:43.071 START TEST accel_decomp_full_mcore 00:18:43.071 ************************************ 00:18:43.071 21:32:03 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:18:43.071 21:32:03 -- accel/accel.sh@16 -- # local accel_opc 00:18:43.071 21:32:03 -- accel/accel.sh@17 -- # local accel_module 00:18:43.071 21:32:03 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:18:43.071 21:32:03 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:18:43.071 21:32:03 -- accel/accel.sh@12 -- # build_accel_config 00:18:43.071 21:32:03 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:18:43.071 21:32:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:18:43.071 21:32:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:18:43.071 21:32:03 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:18:43.071 21:32:03 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:18:43.071 21:32:03 -- accel/accel.sh@41 -- # local IFS=, 00:18:43.071 21:32:03 -- accel/accel.sh@42 -- # jq -r . 00:18:43.071 [2024-07-11 21:32:03.640373] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:18:43.071 [2024-07-11 21:32:03.640520] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69166 ] 00:18:43.071 [2024-07-11 21:32:03.778644] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:43.071 [2024-07-11 21:32:03.884607] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:43.071 [2024-07-11 21:32:03.884728] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:43.071 [2024-07-11 21:32:03.884838] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:43.071 [2024-07-11 21:32:03.884837] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:44.444 21:32:05 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:18:44.444 00:18:44.444 SPDK Configuration: 00:18:44.444 Core mask: 0xf 00:18:44.444 00:18:44.444 Accel Perf Configuration: 00:18:44.444 Workload Type: decompress 00:18:44.444 Transfer size: 111250 bytes 00:18:44.444 Vector count 1 00:18:44.444 Module: software 00:18:44.444 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:18:44.444 Queue depth: 32 00:18:44.444 Allocate depth: 32 00:18:44.444 # threads/core: 1 00:18:44.444 Run time: 1 seconds 00:18:44.444 Verify: Yes 00:18:44.444 00:18:44.444 Running for 1 seconds... 00:18:44.444 00:18:44.444 Core,Thread Transfers Bandwidth Failed Miscompares 00:18:44.444 ------------------------------------------------------------------------------------ 00:18:44.444 0,0 4384/s 181 MiB/s 0 0 00:18:44.444 3,0 4064/s 167 MiB/s 0 0 00:18:44.444 2,0 4192/s 173 MiB/s 0 0 00:18:44.444 1,0 4192/s 173 MiB/s 0 0 00:18:44.444 ==================================================================================== 00:18:44.444 Total 16832/s 1785 MiB/s 0 0' 00:18:44.444 21:32:05 -- accel/accel.sh@20 -- # IFS=: 00:18:44.444 21:32:05 -- accel/accel.sh@20 -- # read -r var val 00:18:44.444 21:32:05 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:18:44.444 21:32:05 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:18:44.444 21:32:05 -- accel/accel.sh@12 -- # build_accel_config 00:18:44.444 21:32:05 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:18:44.444 21:32:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:18:44.444 21:32:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:18:44.445 21:32:05 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:18:44.445 21:32:05 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:18:44.445 21:32:05 -- accel/accel.sh@41 -- # local IFS=, 00:18:44.445 21:32:05 -- accel/accel.sh@42 -- # jq -r . 00:18:44.445 [2024-07-11 21:32:05.148886] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
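accel_decomp_full_mcore combines the two earlier variants, passing both -o 0 and -m 0xf, so the 111250-byte transfers are spread over the same four cores. The totals line up the same way:

  16832 transfers/s x 111250 bytes ~= 1872.6 MB/s ~= 1785 MiB/s, about 3.9x the 4320/s of the single-core accel_decmop_full run.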
00:18:44.445 [2024-07-11 21:32:05.148996] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69189 ] 00:18:44.445 [2024-07-11 21:32:05.283092] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:44.445 [2024-07-11 21:32:05.381941] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:44.445 [2024-07-11 21:32:05.382103] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:44.445 [2024-07-11 21:32:05.382252] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:44.445 [2024-07-11 21:32:05.382402] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:44.703 21:32:05 -- accel/accel.sh@21 -- # val= 00:18:44.703 21:32:05 -- accel/accel.sh@22 -- # case "$var" in 00:18:44.703 21:32:05 -- accel/accel.sh@20 -- # IFS=: 00:18:44.703 21:32:05 -- accel/accel.sh@20 -- # read -r var val 00:18:44.703 21:32:05 -- accel/accel.sh@21 -- # val= 00:18:44.703 21:32:05 -- accel/accel.sh@22 -- # case "$var" in 00:18:44.703 21:32:05 -- accel/accel.sh@20 -- # IFS=: 00:18:44.703 21:32:05 -- accel/accel.sh@20 -- # read -r var val 00:18:44.703 21:32:05 -- accel/accel.sh@21 -- # val= 00:18:44.703 21:32:05 -- accel/accel.sh@22 -- # case "$var" in 00:18:44.703 21:32:05 -- accel/accel.sh@20 -- # IFS=: 00:18:44.703 21:32:05 -- accel/accel.sh@20 -- # read -r var val 00:18:44.703 21:32:05 -- accel/accel.sh@21 -- # val=0xf 00:18:44.703 21:32:05 -- accel/accel.sh@22 -- # case "$var" in 00:18:44.703 21:32:05 -- accel/accel.sh@20 -- # IFS=: 00:18:44.703 21:32:05 -- accel/accel.sh@20 -- # read -r var val 00:18:44.703 21:32:05 -- accel/accel.sh@21 -- # val= 00:18:44.703 21:32:05 -- accel/accel.sh@22 -- # case "$var" in 00:18:44.703 21:32:05 -- accel/accel.sh@20 -- # IFS=: 00:18:44.703 21:32:05 -- accel/accel.sh@20 -- # read -r var val 00:18:44.703 21:32:05 -- accel/accel.sh@21 -- # val= 00:18:44.703 21:32:05 -- accel/accel.sh@22 -- # case "$var" in 00:18:44.703 21:32:05 -- accel/accel.sh@20 -- # IFS=: 00:18:44.703 21:32:05 -- accel/accel.sh@20 -- # read -r var val 00:18:44.703 21:32:05 -- accel/accel.sh@21 -- # val=decompress 00:18:44.703 21:32:05 -- accel/accel.sh@22 -- # case "$var" in 00:18:44.703 21:32:05 -- accel/accel.sh@24 -- # accel_opc=decompress 00:18:44.703 21:32:05 -- accel/accel.sh@20 -- # IFS=: 00:18:44.703 21:32:05 -- accel/accel.sh@20 -- # read -r var val 00:18:44.703 21:32:05 -- accel/accel.sh@21 -- # val='111250 bytes' 00:18:44.703 21:32:05 -- accel/accel.sh@22 -- # case "$var" in 00:18:44.703 21:32:05 -- accel/accel.sh@20 -- # IFS=: 00:18:44.703 21:32:05 -- accel/accel.sh@20 -- # read -r var val 00:18:44.703 21:32:05 -- accel/accel.sh@21 -- # val= 00:18:44.703 21:32:05 -- accel/accel.sh@22 -- # case "$var" in 00:18:44.703 21:32:05 -- accel/accel.sh@20 -- # IFS=: 00:18:44.703 21:32:05 -- accel/accel.sh@20 -- # read -r var val 00:18:44.703 21:32:05 -- accel/accel.sh@21 -- # val=software 00:18:44.703 21:32:05 -- accel/accel.sh@22 -- # case "$var" in 00:18:44.703 21:32:05 -- accel/accel.sh@23 -- # accel_module=software 00:18:44.703 21:32:05 -- accel/accel.sh@20 -- # IFS=: 00:18:44.703 21:32:05 -- accel/accel.sh@20 -- # read -r var val 00:18:44.703 21:32:05 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:18:44.703 21:32:05 -- accel/accel.sh@22 -- # case "$var" in 00:18:44.703 21:32:05 -- accel/accel.sh@20 -- # IFS=: 
00:18:44.703 21:32:05 -- accel/accel.sh@20 -- # read -r var val 00:18:44.703 21:32:05 -- accel/accel.sh@21 -- # val=32 00:18:44.703 21:32:05 -- accel/accel.sh@22 -- # case "$var" in 00:18:44.703 21:32:05 -- accel/accel.sh@20 -- # IFS=: 00:18:44.703 21:32:05 -- accel/accel.sh@20 -- # read -r var val 00:18:44.703 21:32:05 -- accel/accel.sh@21 -- # val=32 00:18:44.703 21:32:05 -- accel/accel.sh@22 -- # case "$var" in 00:18:44.703 21:32:05 -- accel/accel.sh@20 -- # IFS=: 00:18:44.703 21:32:05 -- accel/accel.sh@20 -- # read -r var val 00:18:44.703 21:32:05 -- accel/accel.sh@21 -- # val=1 00:18:44.703 21:32:05 -- accel/accel.sh@22 -- # case "$var" in 00:18:44.703 21:32:05 -- accel/accel.sh@20 -- # IFS=: 00:18:44.703 21:32:05 -- accel/accel.sh@20 -- # read -r var val 00:18:44.703 21:32:05 -- accel/accel.sh@21 -- # val='1 seconds' 00:18:44.703 21:32:05 -- accel/accel.sh@22 -- # case "$var" in 00:18:44.704 21:32:05 -- accel/accel.sh@20 -- # IFS=: 00:18:44.704 21:32:05 -- accel/accel.sh@20 -- # read -r var val 00:18:44.704 21:32:05 -- accel/accel.sh@21 -- # val=Yes 00:18:44.704 21:32:05 -- accel/accel.sh@22 -- # case "$var" in 00:18:44.704 21:32:05 -- accel/accel.sh@20 -- # IFS=: 00:18:44.704 21:32:05 -- accel/accel.sh@20 -- # read -r var val 00:18:44.704 21:32:05 -- accel/accel.sh@21 -- # val= 00:18:44.704 21:32:05 -- accel/accel.sh@22 -- # case "$var" in 00:18:44.704 21:32:05 -- accel/accel.sh@20 -- # IFS=: 00:18:44.704 21:32:05 -- accel/accel.sh@20 -- # read -r var val 00:18:44.704 21:32:05 -- accel/accel.sh@21 -- # val= 00:18:44.704 21:32:05 -- accel/accel.sh@22 -- # case "$var" in 00:18:44.704 21:32:05 -- accel/accel.sh@20 -- # IFS=: 00:18:44.704 21:32:05 -- accel/accel.sh@20 -- # read -r var val 00:18:46.078 21:32:06 -- accel/accel.sh@21 -- # val= 00:18:46.078 21:32:06 -- accel/accel.sh@22 -- # case "$var" in 00:18:46.078 21:32:06 -- accel/accel.sh@20 -- # IFS=: 00:18:46.078 21:32:06 -- accel/accel.sh@20 -- # read -r var val 00:18:46.078 21:32:06 -- accel/accel.sh@21 -- # val= 00:18:46.078 21:32:06 -- accel/accel.sh@22 -- # case "$var" in 00:18:46.078 21:32:06 -- accel/accel.sh@20 -- # IFS=: 00:18:46.078 21:32:06 -- accel/accel.sh@20 -- # read -r var val 00:18:46.078 21:32:06 -- accel/accel.sh@21 -- # val= 00:18:46.078 21:32:06 -- accel/accel.sh@22 -- # case "$var" in 00:18:46.078 21:32:06 -- accel/accel.sh@20 -- # IFS=: 00:18:46.078 21:32:06 -- accel/accel.sh@20 -- # read -r var val 00:18:46.078 21:32:06 -- accel/accel.sh@21 -- # val= 00:18:46.078 21:32:06 -- accel/accel.sh@22 -- # case "$var" in 00:18:46.078 21:32:06 -- accel/accel.sh@20 -- # IFS=: 00:18:46.078 21:32:06 -- accel/accel.sh@20 -- # read -r var val 00:18:46.078 21:32:06 -- accel/accel.sh@21 -- # val= 00:18:46.078 21:32:06 -- accel/accel.sh@22 -- # case "$var" in 00:18:46.078 21:32:06 -- accel/accel.sh@20 -- # IFS=: 00:18:46.078 21:32:06 -- accel/accel.sh@20 -- # read -r var val 00:18:46.078 21:32:06 -- accel/accel.sh@21 -- # val= 00:18:46.078 21:32:06 -- accel/accel.sh@22 -- # case "$var" in 00:18:46.078 21:32:06 -- accel/accel.sh@20 -- # IFS=: 00:18:46.078 21:32:06 -- accel/accel.sh@20 -- # read -r var val 00:18:46.078 21:32:06 -- accel/accel.sh@21 -- # val= 00:18:46.078 21:32:06 -- accel/accel.sh@22 -- # case "$var" in 00:18:46.078 21:32:06 -- accel/accel.sh@20 -- # IFS=: 00:18:46.078 21:32:06 -- accel/accel.sh@20 -- # read -r var val 00:18:46.078 21:32:06 -- accel/accel.sh@21 -- # val= 00:18:46.078 21:32:06 -- accel/accel.sh@22 -- # case "$var" in 00:18:46.078 21:32:06 -- accel/accel.sh@20 -- # IFS=: 00:18:46.078 21:32:06 -- 
accel/accel.sh@20 -- # read -r var val 00:18:46.078 21:32:06 -- accel/accel.sh@21 -- # val= 00:18:46.078 21:32:06 -- accel/accel.sh@22 -- # case "$var" in 00:18:46.078 21:32:06 -- accel/accel.sh@20 -- # IFS=: 00:18:46.078 21:32:06 -- accel/accel.sh@20 -- # read -r var val 00:18:46.078 21:32:06 -- accel/accel.sh@28 -- # [[ -n software ]] 00:18:46.078 21:32:06 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:18:46.078 21:32:06 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:46.078 00:18:46.078 real 0m3.002s 00:18:46.078 user 0m9.440s 00:18:46.078 sys 0m0.265s 00:18:46.078 21:32:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:46.078 21:32:06 -- common/autotest_common.sh@10 -- # set +x 00:18:46.078 ************************************ 00:18:46.078 END TEST accel_decomp_full_mcore 00:18:46.078 ************************************ 00:18:46.078 21:32:06 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:18:46.078 21:32:06 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:18:46.078 21:32:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:46.078 21:32:06 -- common/autotest_common.sh@10 -- # set +x 00:18:46.078 ************************************ 00:18:46.078 START TEST accel_decomp_mthread 00:18:46.078 ************************************ 00:18:46.078 21:32:06 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:18:46.078 21:32:06 -- accel/accel.sh@16 -- # local accel_opc 00:18:46.078 21:32:06 -- accel/accel.sh@17 -- # local accel_module 00:18:46.078 21:32:06 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:18:46.078 21:32:06 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:18:46.078 21:32:06 -- accel/accel.sh@12 -- # build_accel_config 00:18:46.078 21:32:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:18:46.078 21:32:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:18:46.078 21:32:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:18:46.078 21:32:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:18:46.078 21:32:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:18:46.078 21:32:06 -- accel/accel.sh@41 -- # local IFS=, 00:18:46.078 21:32:06 -- accel/accel.sh@42 -- # jq -r . 00:18:46.078 [2024-07-11 21:32:06.688957] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:18:46.078 [2024-07-11 21:32:06.689086] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69226 ] 00:18:46.078 [2024-07-11 21:32:06.827552] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:46.078 [2024-07-11 21:32:06.929025] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:47.450 21:32:08 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:18:47.450 00:18:47.450 SPDK Configuration: 00:18:47.450 Core mask: 0x1 00:18:47.450 00:18:47.450 Accel Perf Configuration: 00:18:47.450 Workload Type: decompress 00:18:47.450 Transfer size: 4096 bytes 00:18:47.450 Vector count 1 00:18:47.450 Module: software 00:18:47.450 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:18:47.450 Queue depth: 32 00:18:47.450 Allocate depth: 32 00:18:47.450 # threads/core: 2 00:18:47.450 Run time: 1 seconds 00:18:47.450 Verify: Yes 00:18:47.450 00:18:47.450 Running for 1 seconds... 00:18:47.450 00:18:47.450 Core,Thread Transfers Bandwidth Failed Miscompares 00:18:47.450 ------------------------------------------------------------------------------------ 00:18:47.450 0,1 32960/s 60 MiB/s 0 0 00:18:47.451 0,0 32800/s 60 MiB/s 0 0 00:18:47.451 ==================================================================================== 00:18:47.451 Total 65760/s 256 MiB/s 0 0' 00:18:47.451 21:32:08 -- accel/accel.sh@20 -- # IFS=: 00:18:47.451 21:32:08 -- accel/accel.sh@20 -- # read -r var val 00:18:47.451 21:32:08 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:18:47.451 21:32:08 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:18:47.451 21:32:08 -- accel/accel.sh@12 -- # build_accel_config 00:18:47.451 21:32:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:18:47.451 21:32:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:18:47.451 21:32:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:18:47.451 21:32:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:18:47.451 21:32:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:18:47.451 21:32:08 -- accel/accel.sh@41 -- # local IFS=, 00:18:47.451 21:32:08 -- accel/accel.sh@42 -- # jq -r . 00:18:47.451 [2024-07-11 21:32:08.179430] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
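accel_decomp_mthread switches the scaling axis: it keeps the single-core 0x1 mask but passes -T 2, i.e. two worker threads on core 0, which is why the table above shows rows 0,0 and 0,1 and '# threads/core: 2' in the configuration. The standalone form is the same accel_perf command with -T 2 appended. For this software decompress path the second thread buys almost nothing:

  (32960 + 32800) transfers/s = 65760/s x 4096 bytes ~= 256 MiB/s, versus 248 MiB/s with a single thread.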
00:18:47.451 [2024-07-11 21:32:08.179552] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69246 ] 00:18:47.451 [2024-07-11 21:32:08.318625] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:47.708 [2024-07-11 21:32:08.413817] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:47.708 21:32:08 -- accel/accel.sh@21 -- # val= 00:18:47.708 21:32:08 -- accel/accel.sh@22 -- # case "$var" in 00:18:47.708 21:32:08 -- accel/accel.sh@20 -- # IFS=: 00:18:47.708 21:32:08 -- accel/accel.sh@20 -- # read -r var val 00:18:47.708 21:32:08 -- accel/accel.sh@21 -- # val= 00:18:47.708 21:32:08 -- accel/accel.sh@22 -- # case "$var" in 00:18:47.708 21:32:08 -- accel/accel.sh@20 -- # IFS=: 00:18:47.708 21:32:08 -- accel/accel.sh@20 -- # read -r var val 00:18:47.708 21:32:08 -- accel/accel.sh@21 -- # val= 00:18:47.708 21:32:08 -- accel/accel.sh@22 -- # case "$var" in 00:18:47.708 21:32:08 -- accel/accel.sh@20 -- # IFS=: 00:18:47.708 21:32:08 -- accel/accel.sh@20 -- # read -r var val 00:18:47.708 21:32:08 -- accel/accel.sh@21 -- # val=0x1 00:18:47.708 21:32:08 -- accel/accel.sh@22 -- # case "$var" in 00:18:47.708 21:32:08 -- accel/accel.sh@20 -- # IFS=: 00:18:47.708 21:32:08 -- accel/accel.sh@20 -- # read -r var val 00:18:47.708 21:32:08 -- accel/accel.sh@21 -- # val= 00:18:47.708 21:32:08 -- accel/accel.sh@22 -- # case "$var" in 00:18:47.708 21:32:08 -- accel/accel.sh@20 -- # IFS=: 00:18:47.708 21:32:08 -- accel/accel.sh@20 -- # read -r var val 00:18:47.708 21:32:08 -- accel/accel.sh@21 -- # val= 00:18:47.708 21:32:08 -- accel/accel.sh@22 -- # case "$var" in 00:18:47.708 21:32:08 -- accel/accel.sh@20 -- # IFS=: 00:18:47.708 21:32:08 -- accel/accel.sh@20 -- # read -r var val 00:18:47.708 21:32:08 -- accel/accel.sh@21 -- # val=decompress 00:18:47.708 21:32:08 -- accel/accel.sh@22 -- # case "$var" in 00:18:47.708 21:32:08 -- accel/accel.sh@24 -- # accel_opc=decompress 00:18:47.708 21:32:08 -- accel/accel.sh@20 -- # IFS=: 00:18:47.708 21:32:08 -- accel/accel.sh@20 -- # read -r var val 00:18:47.708 21:32:08 -- accel/accel.sh@21 -- # val='4096 bytes' 00:18:47.708 21:32:08 -- accel/accel.sh@22 -- # case "$var" in 00:18:47.708 21:32:08 -- accel/accel.sh@20 -- # IFS=: 00:18:47.708 21:32:08 -- accel/accel.sh@20 -- # read -r var val 00:18:47.708 21:32:08 -- accel/accel.sh@21 -- # val= 00:18:47.708 21:32:08 -- accel/accel.sh@22 -- # case "$var" in 00:18:47.708 21:32:08 -- accel/accel.sh@20 -- # IFS=: 00:18:47.708 21:32:08 -- accel/accel.sh@20 -- # read -r var val 00:18:47.708 21:32:08 -- accel/accel.sh@21 -- # val=software 00:18:47.708 21:32:08 -- accel/accel.sh@22 -- # case "$var" in 00:18:47.708 21:32:08 -- accel/accel.sh@23 -- # accel_module=software 00:18:47.708 21:32:08 -- accel/accel.sh@20 -- # IFS=: 00:18:47.708 21:32:08 -- accel/accel.sh@20 -- # read -r var val 00:18:47.708 21:32:08 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:18:47.708 21:32:08 -- accel/accel.sh@22 -- # case "$var" in 00:18:47.708 21:32:08 -- accel/accel.sh@20 -- # IFS=: 00:18:47.708 21:32:08 -- accel/accel.sh@20 -- # read -r var val 00:18:47.708 21:32:08 -- accel/accel.sh@21 -- # val=32 00:18:47.708 21:32:08 -- accel/accel.sh@22 -- # case "$var" in 00:18:47.708 21:32:08 -- accel/accel.sh@20 -- # IFS=: 00:18:47.708 21:32:08 -- accel/accel.sh@20 -- # read -r var val 00:18:47.708 21:32:08 -- 
accel/accel.sh@21 -- # val=32 00:18:47.708 21:32:08 -- accel/accel.sh@22 -- # case "$var" in 00:18:47.709 21:32:08 -- accel/accel.sh@20 -- # IFS=: 00:18:47.709 21:32:08 -- accel/accel.sh@20 -- # read -r var val 00:18:47.709 21:32:08 -- accel/accel.sh@21 -- # val=2 00:18:47.709 21:32:08 -- accel/accel.sh@22 -- # case "$var" in 00:18:47.709 21:32:08 -- accel/accel.sh@20 -- # IFS=: 00:18:47.709 21:32:08 -- accel/accel.sh@20 -- # read -r var val 00:18:47.709 21:32:08 -- accel/accel.sh@21 -- # val='1 seconds' 00:18:47.709 21:32:08 -- accel/accel.sh@22 -- # case "$var" in 00:18:47.709 21:32:08 -- accel/accel.sh@20 -- # IFS=: 00:18:47.709 21:32:08 -- accel/accel.sh@20 -- # read -r var val 00:18:47.709 21:32:08 -- accel/accel.sh@21 -- # val=Yes 00:18:47.709 21:32:08 -- accel/accel.sh@22 -- # case "$var" in 00:18:47.709 21:32:08 -- accel/accel.sh@20 -- # IFS=: 00:18:47.709 21:32:08 -- accel/accel.sh@20 -- # read -r var val 00:18:47.709 21:32:08 -- accel/accel.sh@21 -- # val= 00:18:47.709 21:32:08 -- accel/accel.sh@22 -- # case "$var" in 00:18:47.709 21:32:08 -- accel/accel.sh@20 -- # IFS=: 00:18:47.709 21:32:08 -- accel/accel.sh@20 -- # read -r var val 00:18:47.709 21:32:08 -- accel/accel.sh@21 -- # val= 00:18:47.709 21:32:08 -- accel/accel.sh@22 -- # case "$var" in 00:18:47.709 21:32:08 -- accel/accel.sh@20 -- # IFS=: 00:18:47.709 21:32:08 -- accel/accel.sh@20 -- # read -r var val 00:18:49.081 21:32:09 -- accel/accel.sh@21 -- # val= 00:18:49.081 21:32:09 -- accel/accel.sh@22 -- # case "$var" in 00:18:49.081 21:32:09 -- accel/accel.sh@20 -- # IFS=: 00:18:49.081 21:32:09 -- accel/accel.sh@20 -- # read -r var val 00:18:49.081 21:32:09 -- accel/accel.sh@21 -- # val= 00:18:49.081 21:32:09 -- accel/accel.sh@22 -- # case "$var" in 00:18:49.081 21:32:09 -- accel/accel.sh@20 -- # IFS=: 00:18:49.081 21:32:09 -- accel/accel.sh@20 -- # read -r var val 00:18:49.081 21:32:09 -- accel/accel.sh@21 -- # val= 00:18:49.081 21:32:09 -- accel/accel.sh@22 -- # case "$var" in 00:18:49.081 21:32:09 -- accel/accel.sh@20 -- # IFS=: 00:18:49.081 21:32:09 -- accel/accel.sh@20 -- # read -r var val 00:18:49.081 21:32:09 -- accel/accel.sh@21 -- # val= 00:18:49.081 21:32:09 -- accel/accel.sh@22 -- # case "$var" in 00:18:49.082 21:32:09 -- accel/accel.sh@20 -- # IFS=: 00:18:49.082 21:32:09 -- accel/accel.sh@20 -- # read -r var val 00:18:49.082 21:32:09 -- accel/accel.sh@21 -- # val= 00:18:49.082 21:32:09 -- accel/accel.sh@22 -- # case "$var" in 00:18:49.082 21:32:09 -- accel/accel.sh@20 -- # IFS=: 00:18:49.082 21:32:09 -- accel/accel.sh@20 -- # read -r var val 00:18:49.082 21:32:09 -- accel/accel.sh@21 -- # val= 00:18:49.082 21:32:09 -- accel/accel.sh@22 -- # case "$var" in 00:18:49.082 21:32:09 -- accel/accel.sh@20 -- # IFS=: 00:18:49.082 21:32:09 -- accel/accel.sh@20 -- # read -r var val 00:18:49.082 21:32:09 -- accel/accel.sh@21 -- # val= 00:18:49.082 ************************************ 00:18:49.082 END TEST accel_decomp_mthread 00:18:49.082 ************************************ 00:18:49.082 21:32:09 -- accel/accel.sh@22 -- # case "$var" in 00:18:49.082 21:32:09 -- accel/accel.sh@20 -- # IFS=: 00:18:49.082 21:32:09 -- accel/accel.sh@20 -- # read -r var val 00:18:49.082 21:32:09 -- accel/accel.sh@28 -- # [[ -n software ]] 00:18:49.082 21:32:09 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:18:49.082 21:32:09 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:49.082 00:18:49.082 real 0m2.969s 00:18:49.082 user 0m2.510s 00:18:49.082 sys 0m0.248s 00:18:49.082 21:32:09 -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:18:49.082 21:32:09 -- common/autotest_common.sh@10 -- # set +x 00:18:49.082 21:32:09 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:18:49.082 21:32:09 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:18:49.082 21:32:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:49.082 21:32:09 -- common/autotest_common.sh@10 -- # set +x 00:18:49.082 ************************************ 00:18:49.082 START TEST accel_deomp_full_mthread 00:18:49.082 ************************************ 00:18:49.082 21:32:09 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:18:49.082 21:32:09 -- accel/accel.sh@16 -- # local accel_opc 00:18:49.082 21:32:09 -- accel/accel.sh@17 -- # local accel_module 00:18:49.082 21:32:09 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:18:49.082 21:32:09 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:18:49.082 21:32:09 -- accel/accel.sh@12 -- # build_accel_config 00:18:49.082 21:32:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:18:49.082 21:32:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:18:49.082 21:32:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:18:49.082 21:32:09 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:18:49.082 21:32:09 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:18:49.082 21:32:09 -- accel/accel.sh@41 -- # local IFS=, 00:18:49.082 21:32:09 -- accel/accel.sh@42 -- # jq -r . 00:18:49.082 [2024-07-11 21:32:09.707836] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:18:49.082 [2024-07-11 21:32:09.707950] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69280 ] 00:18:49.082 [2024-07-11 21:32:09.848444] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:49.082 [2024-07-11 21:32:09.947149] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:50.455 21:32:11 -- accel/accel.sh@18 -- # out='Preparing input file... 00:18:50.455 00:18:50.455 SPDK Configuration: 00:18:50.455 Core mask: 0x1 00:18:50.455 00:18:50.455 Accel Perf Configuration: 00:18:50.455 Workload Type: decompress 00:18:50.455 Transfer size: 111250 bytes 00:18:50.456 Vector count 1 00:18:50.456 Module: software 00:18:50.456 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:18:50.456 Queue depth: 32 00:18:50.456 Allocate depth: 32 00:18:50.456 # threads/core: 2 00:18:50.456 Run time: 1 seconds 00:18:50.456 Verify: Yes 00:18:50.456 00:18:50.456 Running for 1 seconds... 
00:18:50.456 00:18:50.456 Core,Thread Transfers Bandwidth Failed Miscompares 00:18:50.456 ------------------------------------------------------------------------------------ 00:18:50.456 0,1 2240/s 92 MiB/s 0 0 00:18:50.456 0,0 2176/s 89 MiB/s 0 0 00:18:50.456 ==================================================================================== 00:18:50.456 Total 4416/s 468 MiB/s 0 0' 00:18:50.456 21:32:11 -- accel/accel.sh@20 -- # IFS=: 00:18:50.456 21:32:11 -- accel/accel.sh@20 -- # read -r var val 00:18:50.456 21:32:11 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:18:50.456 21:32:11 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:18:50.456 21:32:11 -- accel/accel.sh@12 -- # build_accel_config 00:18:50.456 21:32:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:18:50.456 21:32:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:18:50.456 21:32:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:18:50.456 21:32:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:18:50.456 21:32:11 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:18:50.456 21:32:11 -- accel/accel.sh@41 -- # local IFS=, 00:18:50.456 21:32:11 -- accel/accel.sh@42 -- # jq -r . 00:18:50.456 [2024-07-11 21:32:11.214209] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:18:50.456 [2024-07-11 21:32:11.214314] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69300 ] 00:18:50.456 [2024-07-11 21:32:11.349745] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:50.715 [2024-07-11 21:32:11.453912] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:50.715 21:32:11 -- accel/accel.sh@21 -- # val= 00:18:50.715 21:32:11 -- accel/accel.sh@22 -- # case "$var" in 00:18:50.715 21:32:11 -- accel/accel.sh@20 -- # IFS=: 00:18:50.715 21:32:11 -- accel/accel.sh@20 -- # read -r var val 00:18:50.715 21:32:11 -- accel/accel.sh@21 -- # val= 00:18:50.715 21:32:11 -- accel/accel.sh@22 -- # case "$var" in 00:18:50.715 21:32:11 -- accel/accel.sh@20 -- # IFS=: 00:18:50.715 21:32:11 -- accel/accel.sh@20 -- # read -r var val 00:18:50.715 21:32:11 -- accel/accel.sh@21 -- # val= 00:18:50.715 21:32:11 -- accel/accel.sh@22 -- # case "$var" in 00:18:50.715 21:32:11 -- accel/accel.sh@20 -- # IFS=: 00:18:50.715 21:32:11 -- accel/accel.sh@20 -- # read -r var val 00:18:50.715 21:32:11 -- accel/accel.sh@21 -- # val=0x1 00:18:50.715 21:32:11 -- accel/accel.sh@22 -- # case "$var" in 00:18:50.715 21:32:11 -- accel/accel.sh@20 -- # IFS=: 00:18:50.715 21:32:11 -- accel/accel.sh@20 -- # read -r var val 00:18:50.715 21:32:11 -- accel/accel.sh@21 -- # val= 00:18:50.715 21:32:11 -- accel/accel.sh@22 -- # case "$var" in 00:18:50.715 21:32:11 -- accel/accel.sh@20 -- # IFS=: 00:18:50.715 21:32:11 -- accel/accel.sh@20 -- # read -r var val 00:18:50.715 21:32:11 -- accel/accel.sh@21 -- # val= 00:18:50.715 21:32:11 -- accel/accel.sh@22 -- # case "$var" in 00:18:50.715 21:32:11 -- accel/accel.sh@20 -- # IFS=: 00:18:50.715 21:32:11 -- accel/accel.sh@20 -- # read -r var val 00:18:50.715 21:32:11 -- accel/accel.sh@21 -- # val=decompress 00:18:50.715 21:32:11 -- accel/accel.sh@22 -- # case "$var" in 00:18:50.715 21:32:11 -- accel/accel.sh@24 -- # 
accel_opc=decompress 00:18:50.715 21:32:11 -- accel/accel.sh@20 -- # IFS=: 00:18:50.715 21:32:11 -- accel/accel.sh@20 -- # read -r var val 00:18:50.715 21:32:11 -- accel/accel.sh@21 -- # val='111250 bytes' 00:18:50.715 21:32:11 -- accel/accel.sh@22 -- # case "$var" in 00:18:50.715 21:32:11 -- accel/accel.sh@20 -- # IFS=: 00:18:50.715 21:32:11 -- accel/accel.sh@20 -- # read -r var val 00:18:50.715 21:32:11 -- accel/accel.sh@21 -- # val= 00:18:50.715 21:32:11 -- accel/accel.sh@22 -- # case "$var" in 00:18:50.715 21:32:11 -- accel/accel.sh@20 -- # IFS=: 00:18:50.715 21:32:11 -- accel/accel.sh@20 -- # read -r var val 00:18:50.716 21:32:11 -- accel/accel.sh@21 -- # val=software 00:18:50.716 21:32:11 -- accel/accel.sh@22 -- # case "$var" in 00:18:50.716 21:32:11 -- accel/accel.sh@23 -- # accel_module=software 00:18:50.716 21:32:11 -- accel/accel.sh@20 -- # IFS=: 00:18:50.716 21:32:11 -- accel/accel.sh@20 -- # read -r var val 00:18:50.716 21:32:11 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:18:50.716 21:32:11 -- accel/accel.sh@22 -- # case "$var" in 00:18:50.716 21:32:11 -- accel/accel.sh@20 -- # IFS=: 00:18:50.716 21:32:11 -- accel/accel.sh@20 -- # read -r var val 00:18:50.716 21:32:11 -- accel/accel.sh@21 -- # val=32 00:18:50.716 21:32:11 -- accel/accel.sh@22 -- # case "$var" in 00:18:50.716 21:32:11 -- accel/accel.sh@20 -- # IFS=: 00:18:50.716 21:32:11 -- accel/accel.sh@20 -- # read -r var val 00:18:50.716 21:32:11 -- accel/accel.sh@21 -- # val=32 00:18:50.716 21:32:11 -- accel/accel.sh@22 -- # case "$var" in 00:18:50.716 21:32:11 -- accel/accel.sh@20 -- # IFS=: 00:18:50.716 21:32:11 -- accel/accel.sh@20 -- # read -r var val 00:18:50.716 21:32:11 -- accel/accel.sh@21 -- # val=2 00:18:50.716 21:32:11 -- accel/accel.sh@22 -- # case "$var" in 00:18:50.716 21:32:11 -- accel/accel.sh@20 -- # IFS=: 00:18:50.716 21:32:11 -- accel/accel.sh@20 -- # read -r var val 00:18:50.716 21:32:11 -- accel/accel.sh@21 -- # val='1 seconds' 00:18:50.716 21:32:11 -- accel/accel.sh@22 -- # case "$var" in 00:18:50.716 21:32:11 -- accel/accel.sh@20 -- # IFS=: 00:18:50.716 21:32:11 -- accel/accel.sh@20 -- # read -r var val 00:18:50.716 21:32:11 -- accel/accel.sh@21 -- # val=Yes 00:18:50.716 21:32:11 -- accel/accel.sh@22 -- # case "$var" in 00:18:50.716 21:32:11 -- accel/accel.sh@20 -- # IFS=: 00:18:50.716 21:32:11 -- accel/accel.sh@20 -- # read -r var val 00:18:50.716 21:32:11 -- accel/accel.sh@21 -- # val= 00:18:50.716 21:32:11 -- accel/accel.sh@22 -- # case "$var" in 00:18:50.716 21:32:11 -- accel/accel.sh@20 -- # IFS=: 00:18:50.716 21:32:11 -- accel/accel.sh@20 -- # read -r var val 00:18:50.716 21:32:11 -- accel/accel.sh@21 -- # val= 00:18:50.716 21:32:11 -- accel/accel.sh@22 -- # case "$var" in 00:18:50.716 21:32:11 -- accel/accel.sh@20 -- # IFS=: 00:18:50.716 21:32:11 -- accel/accel.sh@20 -- # read -r var val 00:18:52.094 21:32:12 -- accel/accel.sh@21 -- # val= 00:18:52.094 21:32:12 -- accel/accel.sh@22 -- # case "$var" in 00:18:52.094 21:32:12 -- accel/accel.sh@20 -- # IFS=: 00:18:52.094 21:32:12 -- accel/accel.sh@20 -- # read -r var val 00:18:52.094 21:32:12 -- accel/accel.sh@21 -- # val= 00:18:52.094 21:32:12 -- accel/accel.sh@22 -- # case "$var" in 00:18:52.094 21:32:12 -- accel/accel.sh@20 -- # IFS=: 00:18:52.094 21:32:12 -- accel/accel.sh@20 -- # read -r var val 00:18:52.094 21:32:12 -- accel/accel.sh@21 -- # val= 00:18:52.094 21:32:12 -- accel/accel.sh@22 -- # case "$var" in 00:18:52.094 21:32:12 -- accel/accel.sh@20 -- # IFS=: 00:18:52.094 21:32:12 -- accel/accel.sh@20 -- # 
read -r var val 00:18:52.094 21:32:12 -- accel/accel.sh@21 -- # val= 00:18:52.094 21:32:12 -- accel/accel.sh@22 -- # case "$var" in 00:18:52.094 21:32:12 -- accel/accel.sh@20 -- # IFS=: 00:18:52.094 21:32:12 -- accel/accel.sh@20 -- # read -r var val 00:18:52.094 21:32:12 -- accel/accel.sh@21 -- # val= 00:18:52.094 21:32:12 -- accel/accel.sh@22 -- # case "$var" in 00:18:52.094 21:32:12 -- accel/accel.sh@20 -- # IFS=: 00:18:52.094 21:32:12 -- accel/accel.sh@20 -- # read -r var val 00:18:52.094 21:32:12 -- accel/accel.sh@21 -- # val= 00:18:52.094 21:32:12 -- accel/accel.sh@22 -- # case "$var" in 00:18:52.094 21:32:12 -- accel/accel.sh@20 -- # IFS=: 00:18:52.094 21:32:12 -- accel/accel.sh@20 -- # read -r var val 00:18:52.094 21:32:12 -- accel/accel.sh@21 -- # val= 00:18:52.094 21:32:12 -- accel/accel.sh@22 -- # case "$var" in 00:18:52.094 21:32:12 -- accel/accel.sh@20 -- # IFS=: 00:18:52.094 21:32:12 -- accel/accel.sh@20 -- # read -r var val 00:18:52.094 21:32:12 -- accel/accel.sh@28 -- # [[ -n software ]] 00:18:52.094 21:32:12 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:18:52.094 21:32:12 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:52.094 00:18:52.094 real 0m3.037s 00:18:52.094 user 0m2.592s 00:18:52.094 sys 0m0.236s 00:18:52.094 21:32:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:52.094 ************************************ 00:18:52.094 END TEST accel_deomp_full_mthread 00:18:52.094 ************************************ 00:18:52.094 21:32:12 -- common/autotest_common.sh@10 -- # set +x 00:18:52.094 21:32:12 -- accel/accel.sh@116 -- # [[ n == y ]] 00:18:52.094 21:32:12 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:18:52.094 21:32:12 -- accel/accel.sh@129 -- # build_accel_config 00:18:52.094 21:32:12 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:18:52.094 21:32:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:52.094 21:32:12 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:18:52.094 21:32:12 -- common/autotest_common.sh@10 -- # set +x 00:18:52.094 21:32:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:18:52.094 21:32:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:18:52.094 21:32:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:18:52.094 21:32:12 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:18:52.094 21:32:12 -- accel/accel.sh@41 -- # local IFS=, 00:18:52.094 21:32:12 -- accel/accel.sh@42 -- # jq -r . 00:18:52.094 ************************************ 00:18:52.094 START TEST accel_dif_functional_tests 00:18:52.094 ************************************ 00:18:52.094 21:32:12 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:18:52.094 [2024-07-11 21:32:12.829895] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
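The accel_deomp_full_mthread case above is only a thin wrapper around the accel_perf example binary. A minimal sketch of reproducing the same run by hand, assuming the build-tree layout used by this job (adjust the repo path for a local checkout); dropping the harness's -c /dev/fd/62 accel config is a simplification and relies on accel_perf falling back to its defaults:

  cd /home/vagrant/spdk_repo/spdk
  # software decompress for 1 second with 2 threads per core, verifying output (-y);
  # -o 0 together with -l <file> makes accel_perf take the transfer size from the
  # input file, which is why the run above reports 111250 bytes
  ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y -o 0 -T 2
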
00:18:52.094 [2024-07-11 21:32:12.830074] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69335 ] 00:18:52.094 [2024-07-11 21:32:12.976252] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:52.351 [2024-07-11 21:32:13.075072] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:52.351 [2024-07-11 21:32:13.075186] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:52.351 [2024-07-11 21:32:13.075193] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:52.351 00:18:52.351 00:18:52.351 CUnit - A unit testing framework for C - Version 2.1-3 00:18:52.351 http://cunit.sourceforge.net/ 00:18:52.351 00:18:52.351 00:18:52.351 Suite: accel_dif 00:18:52.351 Test: verify: DIF generated, GUARD check ...passed 00:18:52.351 Test: verify: DIF generated, APPTAG check ...passed 00:18:52.351 Test: verify: DIF generated, REFTAG check ...passed 00:18:52.351 Test: verify: DIF not generated, GUARD check ...[2024-07-11 21:32:13.165353] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:18:52.351 passed 00:18:52.351 Test: verify: DIF not generated, APPTAG check ...[2024-07-11 21:32:13.165433] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:18:52.351 [2024-07-11 21:32:13.165472] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:18:52.351 passed 00:18:52.352 Test: verify: DIF not generated, REFTAG check ...[2024-07-11 21:32:13.165629] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:18:52.352 [2024-07-11 21:32:13.165672] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:18:52.352 passed 00:18:52.352 Test: verify: APPTAG correct, APPTAG check ...passed 00:18:52.352 Test: verify: APPTAG incorrect, APPTAG check ...passed 00:18:52.352 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:18:52.352 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:18:52.352 Test: verify: REFTAG_INIT correct, REFTAG check ...[2024-07-11 21:32:13.165706] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:18:52.352 [2024-07-11 21:32:13.165769] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:18:52.352 passed 00:18:52.352 Test: verify: REFTAG_INIT incorrect, REFTAG check ...passed 00:18:52.352 Test: generate copy: DIF generated, GUARD check ...passed 00:18:52.352 Test: generate copy: DIF generated, APTTAG check ...passed 00:18:52.352 Test: generate copy: DIF generated, REFTAG check ...[2024-07-11 21:32:13.166041] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:18:52.352 passed 00:18:52.352 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:18:52.352 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:18:52.352 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:18:52.352 Test: generate copy: iovecs-len validate ...[2024-07-11 21:32:13.166566] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned passed 00:18:52.352 Test: generate copy: buffer alignment validate ...passed 00:18:52.352 00:18:52.352 Run 
Summary: Type Total Ran Passed Failed Inactive 00:18:52.352 suites 1 1 n/a 0 0 00:18:52.352 tests 20 20 20 0 0 00:18:52.352 asserts 204 204 204 0 n/a 00:18:52.352 00:18:52.352 Elapsed time = 0.005 seconds 00:18:52.352 with block_size. 00:18:52.609 00:18:52.609 real 0m0.611s 00:18:52.609 user 0m0.799s 00:18:52.609 sys 0m0.154s 00:18:52.609 21:32:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:52.609 ************************************ 00:18:52.609 END TEST accel_dif_functional_tests 00:18:52.609 ************************************ 00:18:52.609 21:32:13 -- common/autotest_common.sh@10 -- # set +x 00:18:52.609 00:18:52.609 real 1m3.897s 00:18:52.609 user 1m8.001s 00:18:52.609 sys 0m6.326s 00:18:52.609 21:32:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:52.609 21:32:13 -- common/autotest_common.sh@10 -- # set +x 00:18:52.609 ************************************ 00:18:52.609 END TEST accel 00:18:52.609 ************************************ 00:18:52.609 21:32:13 -- spdk/autotest.sh@190 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:18:52.609 21:32:13 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:18:52.609 21:32:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:52.609 21:32:13 -- common/autotest_common.sh@10 -- # set +x 00:18:52.609 ************************************ 00:18:52.609 START TEST accel_rpc 00:18:52.609 ************************************ 00:18:52.609 21:32:13 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:18:52.609 * Looking for test storage... 00:18:52.609 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:18:52.609 21:32:13 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:18:52.609 21:32:13 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=69399 00:18:52.609 21:32:13 -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:18:52.609 21:32:13 -- accel/accel_rpc.sh@15 -- # waitforlisten 69399 00:18:52.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:52.609 21:32:13 -- common/autotest_common.sh@819 -- # '[' -z 69399 ']' 00:18:52.609 21:32:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:52.609 21:32:13 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:52.609 21:32:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:52.609 21:32:13 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:52.609 21:32:13 -- common/autotest_common.sh@10 -- # set +x 00:18:52.867 [2024-07-11 21:32:13.598741] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
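The accel_dif_functional_tests suite above exercises the negative paths on purpose: each "*ERROR*: Failed to compare Guard/App Tag/Ref Tag" line is printed while a "verify: DIF not generated" test confirms that a corrupted Guard, Application Tag or Reference Tag is actually detected, so those errors are expected and the tests still pass. A rough sketch of invoking just this suite directly; the harness generates the accel JSON config and feeds it over fd 62, and the empty subsystems list below is only an illustrative stand-in for that generated config:

  cd /home/vagrant/spdk_repo/spdk
  # same binary and -c shape as accel.sh uses above; the JSON here is a stand-in
  ./test/accel/dif/dif -c <(printf '{"subsystems": []}\n')
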
00:18:52.867 [2024-07-11 21:32:13.599050] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69399 ] 00:18:52.867 [2024-07-11 21:32:13.742269] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:53.125 [2024-07-11 21:32:13.848497] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:53.125 [2024-07-11 21:32:13.848938] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:53.690 21:32:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:53.690 21:32:14 -- common/autotest_common.sh@852 -- # return 0 00:18:53.690 21:32:14 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:18:53.690 21:32:14 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:18:53.690 21:32:14 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:18:53.690 21:32:14 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:18:53.690 21:32:14 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:18:53.690 21:32:14 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:18:53.690 21:32:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:53.690 21:32:14 -- common/autotest_common.sh@10 -- # set +x 00:18:53.690 ************************************ 00:18:53.690 START TEST accel_assign_opcode 00:18:53.691 ************************************ 00:18:53.691 21:32:14 -- common/autotest_common.sh@1104 -- # accel_assign_opcode_test_suite 00:18:53.691 21:32:14 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:18:53.691 21:32:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:53.691 21:32:14 -- common/autotest_common.sh@10 -- # set +x 00:18:53.691 [2024-07-11 21:32:14.617634] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:18:53.691 21:32:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:53.691 21:32:14 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:18:53.691 21:32:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:53.691 21:32:14 -- common/autotest_common.sh@10 -- # set +x 00:18:53.691 [2024-07-11 21:32:14.625633] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:18:53.691 21:32:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:53.691 21:32:14 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:18:53.691 21:32:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:53.691 21:32:14 -- common/autotest_common.sh@10 -- # set +x 00:18:53.949 21:32:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:53.949 21:32:14 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:18:53.949 21:32:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:53.949 21:32:14 -- common/autotest_common.sh@10 -- # set +x 00:18:53.949 21:32:14 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:18:53.949 21:32:14 -- accel/accel_rpc.sh@42 -- # grep software 00:18:53.949 21:32:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:53.949 software 00:18:53.949 00:18:53.949 real 0m0.288s 00:18:53.949 user 0m0.046s 00:18:53.949 sys 0m0.009s 00:18:54.206 ************************************ 00:18:54.206 END TEST accel_assign_opcode 00:18:54.206 ************************************ 00:18:54.207 21:32:14 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:18:54.207 21:32:14 -- common/autotest_common.sh@10 -- # set +x 00:18:54.207 21:32:14 -- accel/accel_rpc.sh@55 -- # killprocess 69399 00:18:54.207 21:32:14 -- common/autotest_common.sh@926 -- # '[' -z 69399 ']' 00:18:54.207 21:32:14 -- common/autotest_common.sh@930 -- # kill -0 69399 00:18:54.207 21:32:14 -- common/autotest_common.sh@931 -- # uname 00:18:54.207 21:32:14 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:54.207 21:32:14 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 69399 00:18:54.207 killing process with pid 69399 00:18:54.207 21:32:14 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:54.207 21:32:14 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:54.207 21:32:14 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 69399' 00:18:54.207 21:32:14 -- common/autotest_common.sh@945 -- # kill 69399 00:18:54.207 21:32:14 -- common/autotest_common.sh@950 -- # wait 69399 00:18:54.464 00:18:54.464 real 0m1.873s 00:18:54.464 user 0m1.968s 00:18:54.464 sys 0m0.452s 00:18:54.465 21:32:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:54.465 ************************************ 00:18:54.465 END TEST accel_rpc 00:18:54.465 ************************************ 00:18:54.465 21:32:15 -- common/autotest_common.sh@10 -- # set +x 00:18:54.465 21:32:15 -- spdk/autotest.sh@191 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:18:54.465 21:32:15 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:18:54.465 21:32:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:54.465 21:32:15 -- common/autotest_common.sh@10 -- # set +x 00:18:54.465 ************************************ 00:18:54.465 START TEST app_cmdline 00:18:54.465 ************************************ 00:18:54.465 21:32:15 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:18:54.723 * Looking for test storage... 00:18:54.723 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:18:54.723 21:32:15 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:18:54.723 21:32:15 -- app/cmdline.sh@17 -- # spdk_tgt_pid=69491 00:18:54.723 21:32:15 -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:18:54.723 21:32:15 -- app/cmdline.sh@18 -- # waitforlisten 69491 00:18:54.723 21:32:15 -- common/autotest_common.sh@819 -- # '[' -z 69491 ']' 00:18:54.723 21:32:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:54.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:54.723 21:32:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:54.723 21:32:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:54.723 21:32:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:54.723 21:32:15 -- common/autotest_common.sh@10 -- # set +x 00:18:54.723 [2024-07-11 21:32:15.527958] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
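The accel_rpc suite above boils down to a short RPC conversation with spdk_tgt: start the target with --wait-for-rpc, pin the copy opcode to a module before framework initialization, then read the assignment back. A condensed sketch of that flow using the same binaries and RPCs that appear in the log; the pid and readiness handling is simplified relative to the harness:

  cd /home/vagrant/spdk_repo/spdk
  ./build/bin/spdk_tgt --wait-for-rpc &
  tgt_pid=$!
  sleep 2   # crude stand-in for the harness's waitforlisten helper

  # opcode assignments may only be changed before the framework starts
  ./scripts/rpc.py accel_assign_opc -o copy -m software
  ./scripts/rpc.py framework_start_init

  # confirm the copy opcode ended up on the software module
  ./scripts/rpc.py accel_get_opc_assignments | jq -r .copy

  kill "$tgt_pid"
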
00:18:54.723 [2024-07-11 21:32:15.528429] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69491 ] 00:18:54.723 [2024-07-11 21:32:15.671088] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:54.981 [2024-07-11 21:32:15.772138] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:54.981 [2024-07-11 21:32:15.772318] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:55.548 21:32:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:55.548 21:32:16 -- common/autotest_common.sh@852 -- # return 0 00:18:55.548 21:32:16 -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:18:55.841 { 00:18:55.841 "version": "SPDK v24.01.1-pre git sha1 4b94202c6", 00:18:55.841 "fields": { 00:18:55.841 "major": 24, 00:18:55.841 "minor": 1, 00:18:55.841 "patch": 1, 00:18:55.841 "suffix": "-pre", 00:18:55.841 "commit": "4b94202c6" 00:18:55.841 } 00:18:55.841 } 00:18:55.841 21:32:16 -- app/cmdline.sh@22 -- # expected_methods=() 00:18:55.841 21:32:16 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:18:55.841 21:32:16 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:18:55.841 21:32:16 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:18:55.841 21:32:16 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:18:55.841 21:32:16 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:18:55.841 21:32:16 -- app/cmdline.sh@26 -- # sort 00:18:55.841 21:32:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:55.841 21:32:16 -- common/autotest_common.sh@10 -- # set +x 00:18:55.841 21:32:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:55.841 21:32:16 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:18:55.841 21:32:16 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:18:55.841 21:32:16 -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:18:55.841 21:32:16 -- common/autotest_common.sh@640 -- # local es=0 00:18:55.841 21:32:16 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:18:55.841 21:32:16 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:55.841 21:32:16 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:55.841 21:32:16 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:55.841 21:32:16 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:55.841 21:32:16 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:55.841 21:32:16 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:55.841 21:32:16 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:55.841 21:32:16 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:18:55.841 21:32:16 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:18:56.105 request: 00:18:56.105 { 00:18:56.105 "method": "env_dpdk_get_mem_stats", 00:18:56.105 "req_id": 1 00:18:56.105 } 00:18:56.105 Got 
JSON-RPC error response 00:18:56.105 response: 00:18:56.105 { 00:18:56.105 "code": -32601, 00:18:56.105 "message": "Method not found" 00:18:56.105 } 00:18:56.105 21:32:17 -- common/autotest_common.sh@643 -- # es=1 00:18:56.105 21:32:17 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:18:56.105 21:32:17 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:18:56.105 21:32:17 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:18:56.105 21:32:17 -- app/cmdline.sh@1 -- # killprocess 69491 00:18:56.105 21:32:17 -- common/autotest_common.sh@926 -- # '[' -z 69491 ']' 00:18:56.105 21:32:17 -- common/autotest_common.sh@930 -- # kill -0 69491 00:18:56.105 21:32:17 -- common/autotest_common.sh@931 -- # uname 00:18:56.364 21:32:17 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:56.364 21:32:17 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 69491 00:18:56.364 killing process with pid 69491 00:18:56.364 21:32:17 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:56.364 21:32:17 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:56.364 21:32:17 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 69491' 00:18:56.364 21:32:17 -- common/autotest_common.sh@945 -- # kill 69491 00:18:56.364 21:32:17 -- common/autotest_common.sh@950 -- # wait 69491 00:18:56.623 00:18:56.623 real 0m2.076s 00:18:56.623 user 0m2.588s 00:18:56.623 sys 0m0.462s 00:18:56.623 21:32:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:56.623 21:32:17 -- common/autotest_common.sh@10 -- # set +x 00:18:56.623 ************************************ 00:18:56.623 END TEST app_cmdline 00:18:56.623 ************************************ 00:18:56.623 21:32:17 -- spdk/autotest.sh@192 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:18:56.623 21:32:17 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:18:56.623 21:32:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:56.623 21:32:17 -- common/autotest_common.sh@10 -- # set +x 00:18:56.623 ************************************ 00:18:56.623 START TEST version 00:18:56.623 ************************************ 00:18:56.623 21:32:17 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:18:56.623 * Looking for test storage... 
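The app_cmdline test above starts the target with an RPC allow-list, which is why env_dpdk_get_mem_stats comes back as error -32601 (Method not found) even though the method exists: only spdk_get_version and rpc_get_methods are permitted. A rough sketch of the same check, reusing the commands visible in the log and again simplifying the startup wait:

  cd /home/vagrant/spdk_repo/spdk
  ./build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
  tgt_pid=$!
  sleep 2   # crude stand-in for waitforlisten

  ./scripts/rpc.py spdk_get_version              # allowed: prints the version object
  ./scripts/rpc.py rpc_get_methods | jq -r '.[]' | sort
  ./scripts/rpc.py env_dpdk_get_mem_stats        # not on the allow-list: expect -32601

  kill "$tgt_pid"
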
00:18:56.881 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:18:56.881 21:32:17 -- app/version.sh@17 -- # get_header_version major 00:18:56.881 21:32:17 -- app/version.sh@14 -- # cut -f2 00:18:56.881 21:32:17 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:18:56.881 21:32:17 -- app/version.sh@14 -- # tr -d '"' 00:18:56.881 21:32:17 -- app/version.sh@17 -- # major=24 00:18:56.881 21:32:17 -- app/version.sh@18 -- # get_header_version minor 00:18:56.881 21:32:17 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:18:56.881 21:32:17 -- app/version.sh@14 -- # cut -f2 00:18:56.881 21:32:17 -- app/version.sh@14 -- # tr -d '"' 00:18:56.881 21:32:17 -- app/version.sh@18 -- # minor=1 00:18:56.881 21:32:17 -- app/version.sh@19 -- # get_header_version patch 00:18:56.881 21:32:17 -- app/version.sh@14 -- # tr -d '"' 00:18:56.881 21:32:17 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:18:56.881 21:32:17 -- app/version.sh@14 -- # cut -f2 00:18:56.881 21:32:17 -- app/version.sh@19 -- # patch=1 00:18:56.881 21:32:17 -- app/version.sh@20 -- # get_header_version suffix 00:18:56.881 21:32:17 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:18:56.881 21:32:17 -- app/version.sh@14 -- # cut -f2 00:18:56.881 21:32:17 -- app/version.sh@14 -- # tr -d '"' 00:18:56.881 21:32:17 -- app/version.sh@20 -- # suffix=-pre 00:18:56.881 21:32:17 -- app/version.sh@22 -- # version=24.1 00:18:56.881 21:32:17 -- app/version.sh@25 -- # (( patch != 0 )) 00:18:56.881 21:32:17 -- app/version.sh@25 -- # version=24.1.1 00:18:56.881 21:32:17 -- app/version.sh@28 -- # version=24.1.1rc0 00:18:56.881 21:32:17 -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:18:56.882 21:32:17 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:18:56.882 21:32:17 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:18:56.882 21:32:17 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:18:56.882 00:18:56.882 real 0m0.148s 00:18:56.882 user 0m0.081s 00:18:56.882 sys 0m0.098s 00:18:56.882 ************************************ 00:18:56.882 END TEST version 00:18:56.882 ************************************ 00:18:56.882 21:32:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:56.882 21:32:17 -- common/autotest_common.sh@10 -- # set +x 00:18:56.882 21:32:17 -- spdk/autotest.sh@194 -- # '[' 0 -eq 1 ']' 00:18:56.882 21:32:17 -- spdk/autotest.sh@204 -- # uname -s 00:18:56.882 21:32:17 -- spdk/autotest.sh@204 -- # [[ Linux == Linux ]] 00:18:56.882 21:32:17 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:18:56.882 21:32:17 -- spdk/autotest.sh@205 -- # [[ 1 -eq 1 ]] 00:18:56.882 21:32:17 -- spdk/autotest.sh@211 -- # [[ 0 -eq 0 ]] 00:18:56.882 21:32:17 -- spdk/autotest.sh@212 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:18:56.882 21:32:17 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:18:56.882 21:32:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:56.882 21:32:17 -- common/autotest_common.sh@10 -- # set +x 00:18:56.882 
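version.sh above derives the expected version purely from include/spdk/version.h: each field is a grep/cut/tr pipeline over the #define lines, and the result is compared against what the installed python package reports via python3 -c 'import spdk; print(spdk.__version__)' (with the -pre suffix rewritten to an rc suffix before comparing). A condensed sketch of that parsing; the helper below is a hypothetical stand-in for the script's get_header_version:

  cd /home/vagrant/spdk_repo/spdk
  get_header_version() {
      # e.g. get_header_version MAJOR -> 24
      grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" include/spdk/version.h |
          cut -f2 | tr -d '"'
  }
  major=$(get_header_version MAJOR)
  minor=$(get_header_version MINOR)
  patch=$(get_header_version PATCH)
  suffix=$(get_header_version SUFFIX)
  echo "${major}.${minor}.${patch}${suffix}"   # 24.1.1-pre for the tree under test
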
************************************ 00:18:56.882 START TEST spdk_dd 00:18:56.882 ************************************ 00:18:56.882 21:32:17 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:18:56.882 * Looking for test storage... 00:18:56.882 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:18:56.882 21:32:17 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:56.882 21:32:17 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:56.882 21:32:17 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:56.882 21:32:17 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:56.882 21:32:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:56.882 21:32:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:56.882 21:32:17 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:56.882 21:32:17 -- paths/export.sh@5 -- # export PATH 00:18:56.882 21:32:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:56.882 21:32:17 -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:18:57.450 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:57.450 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:57.450 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:57.450 21:32:18 -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:18:57.450 21:32:18 -- dd/dd.sh@11 -- # nvme_in_userspace 00:18:57.450 21:32:18 -- scripts/common.sh@311 -- # local bdf bdfs 00:18:57.450 21:32:18 -- scripts/common.sh@312 -- # local nvmes 00:18:57.450 21:32:18 -- scripts/common.sh@314 -- # [[ -n '' ]] 00:18:57.450 21:32:18 -- scripts/common.sh@317 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:18:57.450 21:32:18 -- scripts/common.sh@317 -- # iter_pci_class_code 01 08 02 00:18:57.450 21:32:18 -- scripts/common.sh@297 -- # local bdf= 00:18:57.450 21:32:18 -- scripts/common.sh@299 -- # 
iter_all_pci_class_code 01 08 02 00:18:57.450 21:32:18 -- scripts/common.sh@232 -- # local class 00:18:57.450 21:32:18 -- scripts/common.sh@233 -- # local subclass 00:18:57.450 21:32:18 -- scripts/common.sh@234 -- # local progif 00:18:57.450 21:32:18 -- scripts/common.sh@235 -- # printf %02x 1 00:18:57.450 21:32:18 -- scripts/common.sh@235 -- # class=01 00:18:57.450 21:32:18 -- scripts/common.sh@236 -- # printf %02x 8 00:18:57.450 21:32:18 -- scripts/common.sh@236 -- # subclass=08 00:18:57.450 21:32:18 -- scripts/common.sh@237 -- # printf %02x 2 00:18:57.450 21:32:18 -- scripts/common.sh@237 -- # progif=02 00:18:57.450 21:32:18 -- scripts/common.sh@239 -- # hash lspci 00:18:57.450 21:32:18 -- scripts/common.sh@240 -- # '[' 02 '!=' 00 ']' 00:18:57.450 21:32:18 -- scripts/common.sh@243 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:18:57.450 21:32:18 -- scripts/common.sh@241 -- # lspci -mm -n -D 00:18:57.450 21:32:18 -- scripts/common.sh@242 -- # grep -i -- -p02 00:18:57.450 21:32:18 -- scripts/common.sh@244 -- # tr -d '"' 00:18:57.450 21:32:18 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:18:57.450 21:32:18 -- scripts/common.sh@300 -- # pci_can_use 0000:00:06.0 00:18:57.450 21:32:18 -- scripts/common.sh@15 -- # local i 00:18:57.450 21:32:18 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:18:57.450 21:32:18 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:18:57.450 21:32:18 -- scripts/common.sh@24 -- # return 0 00:18:57.450 21:32:18 -- scripts/common.sh@301 -- # echo 0000:00:06.0 00:18:57.450 21:32:18 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:18:57.450 21:32:18 -- scripts/common.sh@300 -- # pci_can_use 0000:00:07.0 00:18:57.450 21:32:18 -- scripts/common.sh@15 -- # local i 00:18:57.450 21:32:18 -- scripts/common.sh@18 -- # [[ =~ 0000:00:07.0 ]] 00:18:57.450 21:32:18 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:18:57.450 21:32:18 -- scripts/common.sh@24 -- # return 0 00:18:57.450 21:32:18 -- scripts/common.sh@301 -- # echo 0000:00:07.0 00:18:57.450 21:32:18 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:18:57.450 21:32:18 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:06.0 ]] 00:18:57.450 21:32:18 -- scripts/common.sh@322 -- # uname -s 00:18:57.450 21:32:18 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:18:57.450 21:32:18 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:18:57.450 21:32:18 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:18:57.450 21:32:18 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:07.0 ]] 00:18:57.450 21:32:18 -- scripts/common.sh@322 -- # uname -s 00:18:57.450 21:32:18 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:18:57.450 21:32:18 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:18:57.450 21:32:18 -- scripts/common.sh@327 -- # (( 2 )) 00:18:57.450 21:32:18 -- scripts/common.sh@328 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:18:57.450 21:32:18 -- dd/dd.sh@13 -- # check_liburing 00:18:57.450 21:32:18 -- dd/common.sh@139 -- # local lib so 00:18:57.450 21:32:18 -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:18:57.450 21:32:18 -- dd/common.sh@142 -- # read -r lib _ so _ 00:18:57.450 21:32:18 -- dd/common.sh@137 -- # LD_TRACE_LOADED_OBJECTS=1 00:18:57.450 21:32:18 -- dd/common.sh@137 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:57.450 21:32:18 -- dd/common.sh@143 -- # [[ linux-vdso.so.1 == liburing.so.* ]] 00:18:57.450 21:32:18 -- dd/common.sh@142 -- # read -r lib _ so _ 00:18:57.450 21:32:18 
-- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.5.0 == liburing.so.* ]] 00:18:57.450 21:32:18 -- dd/common.sh@142 -- # read -r lib _ so _ 00:18:57.450 21:32:18 -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.5.0 == liburing.so.* ]] 00:18:57.450 21:32:18 -- dd/common.sh@142 -- # read -r lib _ so _ 00:18:57.450 21:32:18 -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.6.0 == liburing.so.* ]] 00:18:57.450 21:32:18 -- dd/common.sh@142 -- # read -r lib _ so _ 00:18:57.450 21:32:18 -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.5.0 == liburing.so.* ]] 00:18:57.450 21:32:18 -- dd/common.sh@142 -- # read -r lib _ so _ 00:18:57.450 21:32:18 -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.5.0 == liburing.so.* ]] 00:18:57.450 21:32:18 -- dd/common.sh@142 -- # read -r lib _ so _ 00:18:57.450 21:32:18 -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.5.0 == liburing.so.* ]] 00:18:57.451 21:32:18 -- dd/common.sh@142 -- # read -r lib _ so _ 00:18:57.451 21:32:18 -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.5.0 == liburing.so.* ]] 00:18:57.451 21:32:18 -- dd/common.sh@142 -- # read -r lib _ so _ 00:18:57.451 21:32:18 -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.5.0 == liburing.so.* ]] 00:18:57.451 21:32:18 -- dd/common.sh@142 -- # read -r lib _ so _ 00:18:57.451 21:32:18 -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.5.0 == liburing.so.* ]] 00:18:57.451 21:32:18 -- dd/common.sh@142 -- # read -r lib _ so _ 00:18:57.451 21:32:18 -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.5.0 == liburing.so.* ]] 00:18:57.451 21:32:18 -- dd/common.sh@142 -- # read -r lib _ so _ 00:18:57.451 21:32:18 -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.5.0 == liburing.so.* ]] 00:18:57.451 21:32:18 -- dd/common.sh@142 -- # read -r lib _ so _ 00:18:57.451 21:32:18 -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.5.0 == liburing.so.* ]] 00:18:57.451 21:32:18 -- dd/common.sh@142 -- # read -r lib _ so _ 00:18:57.451 21:32:18 -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.9.0 == liburing.so.* ]] 00:18:57.451 21:32:18 -- dd/common.sh@142 -- # read -r lib _ so _ 00:18:57.451 21:32:18 -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.10.1 == liburing.so.* ]] 00:18:57.451 21:32:18 -- dd/common.sh@142 -- # read -r lib _ so _ 00:18:57.451 21:32:18 -- dd/common.sh@143 -- # [[ libspdk_lvol.so.9.1 == liburing.so.* ]] 00:18:57.451 21:32:18 -- dd/common.sh@142 -- # read -r lib _ so _ 00:18:57.451 21:32:18 -- dd/common.sh@143 -- # [[ libspdk_blob.so.10.1 == liburing.so.* ]] 00:18:57.451 21:32:18 -- dd/common.sh@142 -- # read -r lib _ so _ 00:18:57.451 21:32:18 -- dd/common.sh@143 -- # [[ libspdk_nvme.so.12.0 == liburing.so.* ]] 00:18:57.451 21:32:18 -- dd/common.sh@142 -- # read -r lib _ so _ 00:18:57.451 21:32:18 -- dd/common.sh@143 -- # [[ libspdk_rdma.so.5.0 == liburing.so.* ]] 00:18:57.451 21:32:18 -- dd/common.sh@142 -- # read -r lib _ so _ 00:18:57.451 21:32:18 -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.5.0 == liburing.so.* ]] 00:18:57.451 21:32:18 -- dd/common.sh@142 -- # read -r lib _ so _ 00:18:57.451 21:32:18 -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.5.0 == liburing.so.* ]] 00:18:57.451 21:32:18 -- dd/common.sh@142 -- # read -r lib _ so _ 00:18:57.451 21:32:18 -- dd/common.sh@143 -- # [[ libspdk_ftl.so.8.0 == liburing.so.* ]] 00:18:57.451 21:32:18 -- dd/common.sh@142 -- # read -r lib _ so _ 00:18:57.451 21:32:18 -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.5.0 == liburing.so.* ]] 00:18:57.451 21:32:18 -- dd/common.sh@142 -- # read -r lib _ so _ 00:18:57.451 21:32:18 -- dd/common.sh@143 -- 
# [[ libspdk_virtio.so.6.0 == liburing.so.* ]] 00:18:57.451 21:32:18 -- dd/common.sh@142 -- # read -r lib _ so _ 00:18:57.451 21:32:18 -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.4.0 == liburing.so.* ]] 00:18:57.451 21:32:18 -- dd/common.sh@142 -- # read -r lib _ so _ 00:18:57.451 21:32:18 -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.5.0 == liburing.so.* ]] 00:18:57.451 21:32:18 -- dd/common.sh@142 -- # read -r lib _ so _ 00:18:57.451 21:32:18 -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.5.0 == liburing.so.* ]] 00:18:57.451 21:32:18 -- dd/common.sh@142 -- # read -r lib _ so _ 00:18:57.451 21:32:18 -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.1.0 == liburing.so.* ]] 00:18:57.451 21:32:18 -- dd/common.sh@142 -- # read -r lib _ so _ 00:18:57.451 21:32:18 -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.5.0 == liburing.so.* ]] 00:18:57.451 21:32:18 -- dd/common.sh@142 -- # read -r lib _ so _ 00:18:57.451 21:32:18 -- dd/common.sh@143 -- # [[ libspdk_ioat.so.6.0 == liburing.so.* ]] 00:18:57.451 21:32:18 -- dd/common.sh@142 -- # read -r lib _ so _ 00:18:57.451 21:32:18 -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.4.0 == liburing.so.* ]] 00:18:57.451 21:32:18 -- dd/common.sh@142 -- # read -r lib _ so _ 00:18:57.451 21:32:18 -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.2.0 == liburing.so.* ]] 00:18:57.451 21:32:18 -- dd/common.sh@142 -- # read -r lib _ so _ 00:18:57.451 21:32:18 -- dd/common.sh@143 -- # [[ libspdk_idxd.so.11.0 == liburing.so.* ]] 00:18:57.451 21:32:18 -- dd/common.sh@142 -- # read -r lib _ so _ 00:18:57.451 21:32:18 -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.3.0 == liburing.so.* ]] 00:18:57.451 21:32:18 -- dd/common.sh@142 -- # read -r lib _ so _ 00:18:57.451 21:32:18 -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.13.0 == liburing.so.* ]] 00:18:57.451 21:32:18 -- dd/common.sh@142 -- # read -r lib _ so _ 00:18:57.451 21:32:18 -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.3.0 == liburing.so.* ]] 00:18:57.451 21:32:18 -- dd/common.sh@142 -- # read -r lib _ so _ 00:18:57.451 21:32:18 -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.3.0 == liburing.so.* ]] 00:18:57.451 21:32:18 -- dd/common.sh@142 -- # read -r lib _ so _ 00:18:57.451 21:32:18 -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.5.0 == liburing.so.* ]] 00:18:57.451 21:32:18 -- dd/common.sh@142 -- # read -r lib _ so _ 00:18:57.451 21:32:18 -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.4.0 == liburing.so.* ]] 00:18:57.451 21:32:18 -- dd/common.sh@142 -- # read -r lib _ so _ 00:18:57.451 21:32:18 -- dd/common.sh@143 -- # [[ libspdk_event.so.12.0 == liburing.so.* ]] 00:18:57.451 21:32:18 -- dd/common.sh@142 -- # read -r lib _ so _ 00:18:57.451 21:32:18 -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.5.0 == liburing.so.* ]] 00:18:57.451 21:32:18 -- dd/common.sh@142 -- # read -r lib _ so _ 00:18:57.451 21:32:18 -- dd/common.sh@143 -- # [[ libspdk_bdev.so.14.0 == liburing.so.* ]] 00:18:57.451 21:32:18 -- dd/common.sh@142 -- # read -r lib _ so _ 00:18:57.451 21:32:18 -- dd/common.sh@143 -- # [[ libspdk_notify.so.5.0 == liburing.so.* ]] 00:18:57.451 21:32:18 -- dd/common.sh@142 -- # read -r lib _ so _ 00:18:57.451 21:32:18 -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.5.0 == liburing.so.* ]] 00:18:57.451 21:32:18 -- dd/common.sh@142 -- # read -r lib _ so _ 00:18:57.451 21:32:18 -- dd/common.sh@143 -- # [[ libspdk_accel.so.14.0 == liburing.so.* ]] 00:18:57.451 21:32:18 -- dd/common.sh@142 -- # read -r lib _ so _ 00:18:57.451 21:32:18 -- 
dd/common.sh@143 -- # [[ libspdk_dma.so.3.0 == liburing.so.* ]] 00:18:57.451 21:32:18 -- dd/common.sh@142 -- # read -r lib _ so _ 00:18:57.451 21:32:18 -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.5.0 == liburing.so.* ]] 00:18:57.451 21:32:18 -- dd/common.sh@142 -- # read -r lib _ so _ 00:18:57.451 21:32:18 -- dd/common.sh@143 -- # [[ libspdk_vmd.so.5.0 == liburing.so.* ]] 00:18:57.451 21:32:18 -- dd/common.sh@142 -- # read -r lib _ so _ 00:18:57.451 21:32:18 -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.4.0 == liburing.so.* ]] 00:18:57.451 21:32:18 -- dd/common.sh@142 -- # read -r lib _ so _ 00:18:57.451 21:32:18 -- dd/common.sh@143 -- # [[ libspdk_sock.so.8.0 == liburing.so.* ]] 00:18:57.451 21:32:18 -- dd/common.sh@142 -- # read -r lib _ so _ 00:18:57.451 21:32:18 -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.2.0 == liburing.so.* ]] 00:18:57.451 21:32:18 -- dd/common.sh@142 -- # read -r lib _ so _ 00:18:57.451 21:32:18 -- dd/common.sh@143 -- # [[ libspdk_init.so.4.0 == liburing.so.* ]] 00:18:57.451 21:32:18 -- dd/common.sh@142 -- # read -r lib _ so _ 00:18:57.451 21:32:18 -- dd/common.sh@143 -- # [[ libspdk_thread.so.9.0 == liburing.so.* ]] 00:18:57.451 21:32:18 -- dd/common.sh@142 -- # read -r lib _ so _ 00:18:57.451 21:32:18 -- dd/common.sh@143 -- # [[ libspdk_trace.so.9.0 == liburing.so.* ]] 00:18:57.451 21:32:18 -- dd/common.sh@142 -- # read -r lib _ so _ 00:18:57.451 21:32:18 -- dd/common.sh@143 -- # [[ libspdk_rpc.so.5.0 == liburing.so.* ]] 00:18:57.451 21:32:18 -- dd/common.sh@142 -- # read -r lib _ so _ 00:18:57.451 21:32:18 -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.5.1 == liburing.so.* ]] 00:18:57.451 21:32:18 -- dd/common.sh@142 -- # read -r lib _ so _ 00:18:57.451 21:32:18 -- dd/common.sh@143 -- # [[ libspdk_json.so.5.1 == liburing.so.* ]] 00:18:57.451 21:32:18 -- dd/common.sh@142 -- # read -r lib _ so _ 00:18:57.451 21:32:18 -- dd/common.sh@143 -- # [[ libspdk_util.so.8.0 == liburing.so.* ]] 00:18:57.451 21:32:18 -- dd/common.sh@142 -- # read -r lib _ so _ 00:18:57.451 21:32:18 -- dd/common.sh@143 -- # [[ libspdk_log.so.6.1 == liburing.so.* ]] 00:18:57.451 21:32:18 -- dd/common.sh@142 -- # read -r lib _ so _ 00:18:57.451 21:32:18 -- dd/common.sh@143 -- # [[ librte_bus_pci.so.24 == liburing.so.* ]] 00:18:57.451 21:32:18 -- dd/common.sh@142 -- # read -r lib _ so _ 00:18:57.451 21:32:18 -- dd/common.sh@143 -- # [[ librte_cryptodev.so.24 == liburing.so.* ]] 00:18:57.451 21:32:18 -- dd/common.sh@142 -- # read -r lib _ so _ 00:18:57.451 21:32:18 -- dd/common.sh@143 -- # [[ librte_dmadev.so.24 == liburing.so.* ]] 00:18:57.451 21:32:18 -- dd/common.sh@142 -- # read -r lib _ so _ 00:18:57.451 21:32:18 -- dd/common.sh@143 -- # [[ librte_eal.so.24 == liburing.so.* ]] 00:18:57.451 21:32:18 -- dd/common.sh@142 -- # read -r lib _ so _ 00:18:57.451 21:32:18 -- dd/common.sh@143 -- # [[ librte_ethdev.so.24 == liburing.so.* ]] 00:18:57.451 21:32:18 -- dd/common.sh@142 -- # read -r lib _ so _ 00:18:57.451 21:32:18 -- dd/common.sh@143 -- # [[ librte_hash.so.24 == liburing.so.* ]] 00:18:57.451 21:32:18 -- dd/common.sh@142 -- # read -r lib _ so _ 00:18:57.451 21:32:18 -- dd/common.sh@143 -- # [[ librte_kvargs.so.24 == liburing.so.* ]] 00:18:57.451 21:32:18 -- dd/common.sh@142 -- # read -r lib _ so _ 00:18:57.451 21:32:18 -- dd/common.sh@143 -- # [[ librte_log.so.24 == liburing.so.* ]] 00:18:57.451 21:32:18 -- dd/common.sh@142 -- # read -r lib _ so _ 00:18:57.451 21:32:18 -- dd/common.sh@143 -- # [[ librte_mbuf.so.24 == liburing.so.* ]] 00:18:57.451 21:32:18 -- dd/common.sh@142 
-- # read -r lib _ so _ 00:18:57.451 21:32:18 -- dd/common.sh@143 -- # [[ librte_mempool.so.24 == liburing.so.* ]] 00:18:57.451 21:32:18 -- dd/common.sh@142 -- # read -r lib _ so _ 00:18:57.451 21:32:18 -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.24 == liburing.so.* ]] 00:18:57.451 21:32:18 -- dd/common.sh@142 -- # read -r lib _ so _ 00:18:57.451 21:32:18 -- dd/common.sh@143 -- # [[ librte_net.so.24 == liburing.so.* ]] 00:18:57.451 21:32:18 -- dd/common.sh@142 -- # read -r lib _ so _ 00:18:57.451 21:32:18 -- dd/common.sh@143 -- # [[ librte_pci.so.24 == liburing.so.* ]] 00:18:57.451 21:32:18 -- dd/common.sh@142 -- # read -r lib _ so _ 00:18:57.451 21:32:18 -- dd/common.sh@143 -- # [[ librte_power.so.24 == liburing.so.* ]] 00:18:57.451 21:32:18 -- dd/common.sh@142 -- # read -r lib _ so _ 00:18:57.451 21:32:18 -- dd/common.sh@143 -- # [[ librte_rcu.so.24 == liburing.so.* ]] 00:18:57.451 21:32:18 -- dd/common.sh@142 -- # read -r lib _ so _ 00:18:57.451 21:32:18 -- dd/common.sh@143 -- # [[ librte_ring.so.24 == liburing.so.* ]] 00:18:57.451 21:32:18 -- dd/common.sh@142 -- # read -r lib _ so _ 00:18:57.451 21:32:18 -- dd/common.sh@143 -- # [[ librte_telemetry.so.24 == liburing.so.* ]] 00:18:57.451 21:32:18 -- dd/common.sh@142 -- # read -r lib _ so _ 00:18:57.451 21:32:18 -- dd/common.sh@143 -- # [[ librte_vhost.so.24 == liburing.so.* ]] 00:18:57.451 21:32:18 -- dd/common.sh@142 -- # read -r lib _ so _ 00:18:57.451 21:32:18 -- dd/common.sh@143 -- # [[ libisal_crypto.so.2 == liburing.so.* ]] 00:18:57.451 21:32:18 -- dd/common.sh@142 -- # read -r lib _ so _ 00:18:57.451 21:32:18 -- dd/common.sh@143 -- # [[ libaccel-config.so.1 == liburing.so.* ]] 00:18:57.451 21:32:18 -- dd/common.sh@142 -- # read -r lib _ so _ 00:18:57.451 21:32:18 -- dd/common.sh@143 -- # [[ libaio.so.1 == liburing.so.* ]] 00:18:57.451 21:32:18 -- dd/common.sh@142 -- # read -r lib _ so _ 00:18:57.451 21:32:18 -- dd/common.sh@143 -- # [[ libiscsi.so.9 == liburing.so.* ]] 00:18:57.451 21:32:18 -- dd/common.sh@142 -- # read -r lib _ so _ 00:18:57.451 21:32:18 -- dd/common.sh@143 -- # [[ libubsan.so.1 == liburing.so.* ]] 00:18:57.451 21:32:18 -- dd/common.sh@142 -- # read -r lib _ so _ 00:18:57.451 21:32:18 -- dd/common.sh@143 -- # [[ libc.so.6 == liburing.so.* ]] 00:18:57.451 21:32:18 -- dd/common.sh@142 -- # read -r lib _ so _ 00:18:57.451 21:32:18 -- dd/common.sh@143 -- # [[ libibverbs.so.1 == liburing.so.* ]] 00:18:57.451 21:32:18 -- dd/common.sh@142 -- # read -r lib _ so _ 00:18:57.451 21:32:18 -- dd/common.sh@143 -- # [[ librdmacm.so.1 == liburing.so.* ]] 00:18:57.451 21:32:18 -- dd/common.sh@142 -- # read -r lib _ so _ 00:18:57.451 21:32:18 -- dd/common.sh@143 -- # [[ libfuse3.so.3 == liburing.so.* ]] 00:18:57.451 21:32:18 -- dd/common.sh@142 -- # read -r lib _ so _ 00:18:57.451 21:32:18 -- dd/common.sh@143 -- # [[ /lib64/ld-linux-x86-64.so.2 == liburing.so.* ]] 00:18:57.452 21:32:18 -- dd/common.sh@142 -- # read -r lib _ so _ 00:18:57.452 21:32:18 -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:18:57.452 21:32:18 -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:18:57.452 * spdk_dd linked to liburing 00:18:57.452 21:32:18 -- dd/common.sh@146 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:18:57.452 21:32:18 -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:18:57.452 21:32:18 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:18:57.452 21:32:18 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:18:57.452 21:32:18 -- 
common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:18:57.452 21:32:18 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:18:57.452 21:32:18 -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:18:57.452 21:32:18 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:18:57.452 21:32:18 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:18:57.452 21:32:18 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:18:57.452 21:32:18 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:18:57.452 21:32:18 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:18:57.452 21:32:18 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:18:57.452 21:32:18 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:18:57.452 21:32:18 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:18:57.452 21:32:18 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:18:57.452 21:32:18 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:18:57.452 21:32:18 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:18:57.452 21:32:18 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:18:57.452 21:32:18 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:18:57.452 21:32:18 -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:18:57.452 21:32:18 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:18:57.452 21:32:18 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:18:57.452 21:32:18 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:18:57.452 21:32:18 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:18:57.452 21:32:18 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:18:57.452 21:32:18 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:18:57.452 21:32:18 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:18:57.452 21:32:18 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:18:57.452 21:32:18 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:18:57.452 21:32:18 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:18:57.452 21:32:18 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:18:57.452 21:32:18 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:18:57.452 21:32:18 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:18:57.452 21:32:18 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:18:57.452 21:32:18 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:18:57.452 21:32:18 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:18:57.452 21:32:18 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:18:57.452 21:32:18 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:18:57.452 21:32:18 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:18:57.452 21:32:18 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:18:57.452 21:32:18 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:18:57.452 21:32:18 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:18:57.452 21:32:18 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:18:57.452 21:32:18 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:18:57.452 21:32:18 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:18:57.452 21:32:18 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:18:57.452 21:32:18 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:18:57.452 21:32:18 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:18:57.452 21:32:18 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 
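check_liburing above does not inspect ELF headers itself: setting LD_TRACE_LOADED_OBJECTS=1 makes running the spdk_dd binary ask the dynamic loader to list every shared object instead of executing the app, and each name is then matched against liburing.so.*. The same check condensed to a couple of lines, assuming the binary path from this build:

  cd /home/vagrant/spdk_repo/spdk
  # ld.so prints the shared-object list and exits instead of running the app
  LD_TRACE_LOADED_OBJECTS=1 ./build/bin/spdk_dd | grep -q 'liburing\.so' &&
      echo '* spdk_dd linked to liburing'
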
00:18:57.452 21:32:18 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:18:57.452 21:32:18 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:18:57.452 21:32:18 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:18:57.452 21:32:18 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:18:57.452 21:32:18 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=y 00:18:57.452 21:32:18 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:18:57.452 21:32:18 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:18:57.452 21:32:18 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:18:57.452 21:32:18 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:18:57.452 21:32:18 -- common/build_config.sh@58 -- # CONFIG_GOLANG=n 00:18:57.452 21:32:18 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:18:57.452 21:32:18 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=y 00:18:57.452 21:32:18 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:18:57.452 21:32:18 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:18:57.452 21:32:18 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:18:57.452 21:32:18 -- common/build_config.sh@64 -- # CONFIG_SHARED=y 00:18:57.452 21:32:18 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:18:57.452 21:32:18 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:18:57.452 21:32:18 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:18:57.452 21:32:18 -- common/build_config.sh@68 -- # CONFIG_AVAHI=n 00:18:57.452 21:32:18 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:18:57.452 21:32:18 -- common/build_config.sh@70 -- # CONFIG_RAID5F=n 00:18:57.452 21:32:18 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:18:57.452 21:32:18 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:18:57.452 21:32:18 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:18:57.452 21:32:18 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:18:57.452 21:32:18 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:18:57.452 21:32:18 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:18:57.452 21:32:18 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:18:57.452 21:32:18 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:18:57.452 21:32:18 -- common/build_config.sh@79 -- # CONFIG_URING=y 00:18:57.452 21:32:18 -- dd/common.sh@149 -- # [[ y != y ]] 00:18:57.452 21:32:18 -- dd/common.sh@152 -- # [[ ! -e /usr/lib64/liburing.so.2 ]] 00:18:57.452 21:32:18 -- dd/common.sh@156 -- # export liburing_in_use=1 00:18:57.452 21:32:18 -- dd/common.sh@156 -- # liburing_in_use=1 00:18:57.452 21:32:18 -- dd/common.sh@157 -- # return 0 00:18:57.452 21:32:18 -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:18:57.452 21:32:18 -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:06.0 0000:00:07.0 00:18:57.452 21:32:18 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:18:57.452 21:32:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:57.452 21:32:18 -- common/autotest_common.sh@10 -- # set +x 00:18:57.452 ************************************ 00:18:57.452 START TEST spdk_dd_basic_rw 00:18:57.452 ************************************ 00:18:57.452 21:32:18 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:06.0 0000:00:07.0 00:18:57.452 * Looking for test storage... 
00:18:57.452 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:18:57.452 21:32:18 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:57.452 21:32:18 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:57.452 21:32:18 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:57.452 21:32:18 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:57.452 21:32:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:57.452 21:32:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:57.452 21:32:18 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:57.452 21:32:18 -- paths/export.sh@5 -- # export PATH 00:18:57.452 21:32:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:57.452 21:32:18 -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:18:57.452 21:32:18 -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:18:57.452 21:32:18 -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:18:57.452 21:32:18 -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:06.0 00:18:57.452 21:32:18 -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:18:57.452 21:32:18 -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:06.0' ['trtype']='pcie') 00:18:57.452 21:32:18 -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:18:57.452 21:32:18 -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:18:57.452 21:32:18 -- dd/basic_rw.sh@92 -- # 
test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:18:57.452 21:32:18 -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:06.0 00:18:57.452 21:32:18 -- dd/common.sh@124 -- # local pci=0000:00:06.0 lbaf id 00:18:57.452 21:32:18 -- dd/common.sh@126 -- # mapfile -t id 00:18:57.452 21:32:18 -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:06.0' 00:18:57.713 21:32:18 -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:06.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric 
Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 98 Data Units Written: 7 Host Read Commands: 2085 Host Write Commands: 93 Controller Busy Time: 0 minutes Power Cycles: 0 Power 
On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:18:57.713 21:32:18 -- dd/common.sh@130 -- # lbaf=04 00:18:57.714 21:32:18 -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:06.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported 
UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported 
Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 98 Data Units Written: 7 Host Read Commands: 2085 Host Write Commands: 93 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:18:57.714 21:32:18 -- dd/common.sh@132 -- # lbaf=4096 00:18:57.714 21:32:18 -- dd/common.sh@134 -- # echo 4096 00:18:57.714 21:32:18 -- dd/basic_rw.sh@93 -- # native_bs=4096 00:18:57.714 21:32:18 -- dd/basic_rw.sh@96 -- # : 00:18:57.714 21:32:18 -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:18:57.714 21:32:18 -- dd/basic_rw.sh@96 -- # gen_conf 00:18:57.714 21:32:18 -- dd/common.sh@31 -- # xtrace_disable 00:18:57.714 21:32:18 -- common/autotest_common.sh@10 -- # set +x 00:18:57.714 21:32:18 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:18:57.714 21:32:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:57.714 21:32:18 -- common/autotest_common.sh@10 -- # set +x 
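
The regex matches traced above are dd/common.sh's get_native_nvme_bs helper reading the controller's identify data: it first captures the currently selected LBA format (LBA Format #04) and then that format's data size (4096 bytes), which becomes native_bs for the rest of the test. A simplified stand-alone version of the same two-step extraction is sketched below, reusing the identify binary and PCIe address from this run; the function name and the plain-string handling are simplifications of the traced mapfile-based helper.

# Simplified sketch of the native-block-size probe traced above.
get_native_bs_sketch() {
    local pci=$1 id lbaf re
    id=$(/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r "trtype:pcie traddr:${pci}")
    re='Current LBA Format: *LBA Format #([0-9]+)'
    [[ $id =~ $re ]] && lbaf=${BASH_REMATCH[1]}            # e.g. 04
    re="LBA Format #${lbaf}: Data Size: *([0-9]+)"
    [[ $id =~ $re ]] && echo "${BASH_REMATCH[1]}"          # e.g. 4096
}
# get_native_bs_sketch 0000:00:06.0    # prints 4096 for the QEMU controller in this run
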
00:18:57.714 ************************************ 00:18:57.714 START TEST dd_bs_lt_native_bs 00:18:57.714 ************************************ 00:18:57.714 21:32:18 -- common/autotest_common.sh@1104 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:18:57.714 21:32:18 -- common/autotest_common.sh@640 -- # local es=0 00:18:57.714 21:32:18 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:18:57.714 21:32:18 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:57.714 21:32:18 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:57.714 21:32:18 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:57.714 21:32:18 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:57.714 21:32:18 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:57.714 21:32:18 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:57.714 21:32:18 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:57.714 21:32:18 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:18:57.714 21:32:18 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:18:57.714 { 00:18:57.714 "subsystems": [ 00:18:57.714 { 00:18:57.714 "subsystem": "bdev", 00:18:57.715 "config": [ 00:18:57.715 { 00:18:57.715 "params": { 00:18:57.715 "trtype": "pcie", 00:18:57.715 "traddr": "0000:00:06.0", 00:18:57.715 "name": "Nvme0" 00:18:57.715 }, 00:18:57.715 "method": "bdev_nvme_attach_controller" 00:18:57.715 }, 00:18:57.715 { 00:18:57.715 "method": "bdev_wait_for_examine" 00:18:57.715 } 00:18:57.715 ] 00:18:57.715 } 00:18:57.715 ] 00:18:57.715 } 00:18:57.715 [2024-07-11 21:32:18.609340] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:18:57.715 [2024-07-11 21:32:18.609461] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69811 ] 00:18:57.973 [2024-07-11 21:32:18.753396] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:57.973 [2024-07-11 21:32:18.856875] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:58.232 [2024-07-11 21:32:19.018451] spdk_dd.c:1145:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:18:58.232 [2024-07-11 21:32:19.018551] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:18:58.232 [2024-07-11 21:32:19.143983] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:18:58.490 21:32:19 -- common/autotest_common.sh@643 -- # es=234 00:18:58.490 21:32:19 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:18:58.490 21:32:19 -- common/autotest_common.sh@652 -- # es=106 00:18:58.490 21:32:19 -- common/autotest_common.sh@653 -- # case "$es" in 00:18:58.490 21:32:19 -- common/autotest_common.sh@660 -- # es=1 00:18:58.490 ************************************ 00:18:58.490 END TEST dd_bs_lt_native_bs 00:18:58.490 ************************************ 00:18:58.490 21:32:19 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:18:58.491 00:18:58.491 real 0m0.678s 00:18:58.491 user 0m0.463s 00:18:58.491 sys 0m0.165s 00:18:58.491 21:32:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:58.491 21:32:19 -- common/autotest_common.sh@10 -- # set +x 00:18:58.491 21:32:19 -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:18:58.491 21:32:19 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:18:58.491 21:32:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:58.491 21:32:19 -- common/autotest_common.sh@10 -- # set +x 00:18:58.491 ************************************ 00:18:58.491 START TEST dd_rw 00:18:58.491 ************************************ 00:18:58.491 21:32:19 -- common/autotest_common.sh@1104 -- # basic_rw 4096 00:18:58.491 21:32:19 -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:18:58.491 21:32:19 -- dd/basic_rw.sh@12 -- # local count size 00:18:58.491 21:32:19 -- dd/basic_rw.sh@13 -- # local qds bss 00:18:58.491 21:32:19 -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:18:58.491 21:32:19 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:18:58.491 21:32:19 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:18:58.491 21:32:19 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:18:58.491 21:32:19 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:18:58.491 21:32:19 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:18:58.491 21:32:19 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:18:58.491 21:32:19 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:18:58.491 21:32:19 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:18:58.491 21:32:19 -- dd/basic_rw.sh@23 -- # count=15 00:18:58.491 21:32:19 -- dd/basic_rw.sh@24 -- # count=15 00:18:58.491 21:32:19 -- dd/basic_rw.sh@25 -- # size=61440 00:18:58.491 21:32:19 -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:18:58.491 21:32:19 -- dd/common.sh@98 -- # xtrace_disable 00:18:58.491 21:32:19 -- common/autotest_common.sh@10 -- # set +x 00:18:59.058 21:32:19 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 
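
The block above is dd_bs_lt_native_bs: spdk_dd is invoked with --bs=2048, below the 4096-byte native block size detected earlier, and the *ERROR* line shows it rejecting the request; the NOT helper from common/autotest_common.sh then treats that expected failure as a pass (the es=234 through es=1 lines appear to be its exit-status normalization). A simplified stand-in for that inversion is sketched below; the real helper also maps specific exit codes, and the placeholders in the usage comment are illustrative.

not_sketch() {
    "$@" && return 1    # the wrapped command succeeding would fail the test
    return 0            # the expected rejection (--bs 2048 < native 4096) counts as a pass
}
# not_sketch /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=<input> --ob=Nvme0n1 --bs=2048 --json <bdev-config>
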
00:18:59.058 21:32:19 -- dd/basic_rw.sh@30 -- # gen_conf 00:18:59.058 21:32:19 -- dd/common.sh@31 -- # xtrace_disable 00:18:59.058 21:32:19 -- common/autotest_common.sh@10 -- # set +x 00:18:59.058 { 00:18:59.058 "subsystems": [ 00:18:59.058 { 00:18:59.058 "subsystem": "bdev", 00:18:59.058 "config": [ 00:18:59.058 { 00:18:59.058 "params": { 00:18:59.058 "trtype": "pcie", 00:18:59.058 "traddr": "0000:00:06.0", 00:18:59.058 "name": "Nvme0" 00:18:59.058 }, 00:18:59.058 "method": "bdev_nvme_attach_controller" 00:18:59.058 }, 00:18:59.058 { 00:18:59.058 "method": "bdev_wait_for_examine" 00:18:59.058 } 00:18:59.058 ] 00:18:59.058 } 00:18:59.058 ] 00:18:59.058 } 00:18:59.058 [2024-07-11 21:32:19.978326] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:18:59.058 [2024-07-11 21:32:19.978668] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69842 ] 00:18:59.317 [2024-07-11 21:32:20.122014] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:59.317 [2024-07-11 21:32:20.225030] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:59.845  Copying: 60/60 [kB] (average 29 MBps) 00:18:59.845 00:18:59.845 21:32:20 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:18:59.845 21:32:20 -- dd/basic_rw.sh@37 -- # gen_conf 00:18:59.845 21:32:20 -- dd/common.sh@31 -- # xtrace_disable 00:18:59.845 21:32:20 -- common/autotest_common.sh@10 -- # set +x 00:18:59.845 [2024-07-11 21:32:20.655129] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:18:59.845 [2024-07-11 21:32:20.655235] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69860 ] 00:18:59.845 { 00:18:59.845 "subsystems": [ 00:18:59.845 { 00:18:59.845 "subsystem": "bdev", 00:18:59.845 "config": [ 00:18:59.845 { 00:18:59.845 "params": { 00:18:59.845 "trtype": "pcie", 00:18:59.845 "traddr": "0000:00:06.0", 00:18:59.845 "name": "Nvme0" 00:18:59.845 }, 00:18:59.845 "method": "bdev_nvme_attach_controller" 00:18:59.845 }, 00:18:59.845 { 00:18:59.845 "method": "bdev_wait_for_examine" 00:18:59.845 } 00:18:59.845 ] 00:18:59.845 } 00:18:59.845 ] 00:18:59.845 } 00:18:59.845 [2024-07-11 21:32:20.788917] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:00.103 [2024-07-11 21:32:20.886629] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:00.362  Copying: 60/60 [kB] (average 29 MBps) 00:19:00.362 00:19:00.362 21:32:21 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:19:00.362 21:32:21 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:19:00.362 21:32:21 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:19:00.362 21:32:21 -- dd/common.sh@11 -- # local nvme_ref= 00:19:00.362 21:32:21 -- dd/common.sh@12 -- # local size=61440 00:19:00.362 21:32:21 -- dd/common.sh@14 -- # local bs=1048576 00:19:00.362 21:32:21 -- dd/common.sh@15 -- # local count=1 00:19:00.362 21:32:21 -- dd/common.sh@18 -- # gen_conf 00:19:00.362 21:32:21 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:19:00.362 21:32:21 -- dd/common.sh@31 -- # xtrace_disable 00:19:00.362 21:32:21 -- common/autotest_common.sh@10 -- # set +x 00:19:00.621 [2024-07-11 21:32:21.331679] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
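
Every spdk_dd invocation in these tests is handed the same minimal bdev configuration on --json (gen_conf appears to emit it, and the trace prints it before each run): it attaches the QEMU NVMe controller at PCIe address 0000:00:06.0 as "Nvme0", which exposes the Nvme0n1 bdev the dumps are written to, and bdev_wait_for_examine holds I/O until bdev examination completes. In the traced run the JSON arrives on /dev/fd/62 rather than a named file; written out to a stand-alone file (illustrative path) it would be:

cat > /tmp/nvme0.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": { "name": "Nvme0", "trtype": "pcie", "traddr": "0000:00:06.0" }
        },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF
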
00:19:00.621 [2024-07-11 21:32:21.331800] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69873 ] 00:19:00.621 { 00:19:00.621 "subsystems": [ 00:19:00.621 { 00:19:00.621 "subsystem": "bdev", 00:19:00.621 "config": [ 00:19:00.621 { 00:19:00.621 "params": { 00:19:00.621 "trtype": "pcie", 00:19:00.621 "traddr": "0000:00:06.0", 00:19:00.621 "name": "Nvme0" 00:19:00.621 }, 00:19:00.621 "method": "bdev_nvme_attach_controller" 00:19:00.621 }, 00:19:00.621 { 00:19:00.621 "method": "bdev_wait_for_examine" 00:19:00.621 } 00:19:00.621 ] 00:19:00.621 } 00:19:00.621 ] 00:19:00.621 } 00:19:00.621 [2024-07-11 21:32:21.472019] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:00.621 [2024-07-11 21:32:21.570104] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:01.137  Copying: 1024/1024 [kB] (average 500 MBps) 00:19:01.137 00:19:01.137 21:32:21 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:19:01.137 21:32:21 -- dd/basic_rw.sh@23 -- # count=15 00:19:01.137 21:32:21 -- dd/basic_rw.sh@24 -- # count=15 00:19:01.137 21:32:21 -- dd/basic_rw.sh@25 -- # size=61440 00:19:01.137 21:32:21 -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:19:01.137 21:32:21 -- dd/common.sh@98 -- # xtrace_disable 00:19:01.137 21:32:21 -- common/autotest_common.sh@10 -- # set +x 00:19:01.703 21:32:22 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:19:01.703 21:32:22 -- dd/basic_rw.sh@30 -- # gen_conf 00:19:01.703 21:32:22 -- dd/common.sh@31 -- # xtrace_disable 00:19:01.703 21:32:22 -- common/autotest_common.sh@10 -- # set +x 00:19:01.703 [2024-07-11 21:32:22.639149] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:19:01.704 [2024-07-11 21:32:22.639269] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69897 ] 00:19:01.704 { 00:19:01.704 "subsystems": [ 00:19:01.704 { 00:19:01.704 "subsystem": "bdev", 00:19:01.704 "config": [ 00:19:01.704 { 00:19:01.704 "params": { 00:19:01.704 "trtype": "pcie", 00:19:01.704 "traddr": "0000:00:06.0", 00:19:01.704 "name": "Nvme0" 00:19:01.704 }, 00:19:01.704 "method": "bdev_nvme_attach_controller" 00:19:01.704 }, 00:19:01.704 { 00:19:01.704 "method": "bdev_wait_for_examine" 00:19:01.704 } 00:19:01.704 ] 00:19:01.704 } 00:19:01.704 ] 00:19:01.704 } 00:19:01.962 [2024-07-11 21:32:22.778540] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:01.962 [2024-07-11 21:32:22.879993] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:02.495  Copying: 60/60 [kB] (average 58 MBps) 00:19:02.495 00:19:02.495 21:32:23 -- dd/basic_rw.sh@37 -- # gen_conf 00:19:02.495 21:32:23 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:19:02.495 21:32:23 -- dd/common.sh@31 -- # xtrace_disable 00:19:02.495 21:32:23 -- common/autotest_common.sh@10 -- # set +x 00:19:02.495 [2024-07-11 21:32:23.304415] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
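
The entries above complete the first dd_rw combination (bs=4096, qd=1): spdk_dd writes dd.dump0 to the Nvme0n1 bdev, reads it back into dd.dump1, diff compares the two files, and clear_nvme resets the bdev with a 1 MiB zero write before the next combination. The same cycle repeats below for queue depth 64 and for the 8192- and 16384-byte block sizes (bss is native_bs shifted left by 0, 1 and 2; qds is 1 and 64). A condensed stand-alone sketch of one iteration follows, using the binary and dump paths from this run and the illustrative /tmp/nvme0.json written in the sketch above; gen_bytes and clear_nvme are the traced helpers, and the urandom and /dev/zero lines merely stand in for them.

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
DUMP0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
DUMP1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
CONF=/tmp/nvme0.json

run_rw_iteration() {                 # illustrative name, not the traced function
    local bs=$1 qd=$2 count=$3
    head -c $((bs * count)) /dev/urandom > "$DUMP0"                              # stand-in for gen_bytes
    "$SPDK_DD" --if="$DUMP0" --ob=Nvme0n1 --bs="$bs" --qd="$qd" --json "$CONF"
    "$SPDK_DD" --ib=Nvme0n1 --of="$DUMP1" --bs="$bs" --qd="$qd" --count="$count" --json "$CONF"
    diff -q "$DUMP0" "$DUMP1"                                                    # round trip must match
    "$SPDK_DD" --if=/dev/zero --ob=Nvme0n1 --bs=1048576 --count=1 --json "$CONF" # like clear_nvme
}
# run_rw_iteration 4096 1 15    # 15 x 4096 B = 61440 B, the first combination above
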
00:19:02.495 [2024-07-11 21:32:23.304538] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69909 ] 00:19:02.495 { 00:19:02.495 "subsystems": [ 00:19:02.495 { 00:19:02.495 "subsystem": "bdev", 00:19:02.495 "config": [ 00:19:02.495 { 00:19:02.495 "params": { 00:19:02.495 "trtype": "pcie", 00:19:02.495 "traddr": "0000:00:06.0", 00:19:02.495 "name": "Nvme0" 00:19:02.495 }, 00:19:02.495 "method": "bdev_nvme_attach_controller" 00:19:02.495 }, 00:19:02.495 { 00:19:02.495 "method": "bdev_wait_for_examine" 00:19:02.495 } 00:19:02.495 ] 00:19:02.495 } 00:19:02.495 ] 00:19:02.495 } 00:19:02.495 [2024-07-11 21:32:23.439342] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:02.754 [2024-07-11 21:32:23.536701] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:03.013  Copying: 60/60 [kB] (average 58 MBps) 00:19:03.013 00:19:03.013 21:32:23 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:19:03.013 21:32:23 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:19:03.013 21:32:23 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:19:03.013 21:32:23 -- dd/common.sh@11 -- # local nvme_ref= 00:19:03.013 21:32:23 -- dd/common.sh@12 -- # local size=61440 00:19:03.013 21:32:23 -- dd/common.sh@14 -- # local bs=1048576 00:19:03.013 21:32:23 -- dd/common.sh@15 -- # local count=1 00:19:03.013 21:32:23 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:19:03.013 21:32:23 -- dd/common.sh@18 -- # gen_conf 00:19:03.013 21:32:23 -- dd/common.sh@31 -- # xtrace_disable 00:19:03.013 21:32:23 -- common/autotest_common.sh@10 -- # set +x 00:19:03.272 { 00:19:03.272 "subsystems": [ 00:19:03.272 { 00:19:03.272 "subsystem": "bdev", 00:19:03.272 "config": [ 00:19:03.272 { 00:19:03.272 "params": { 00:19:03.272 "trtype": "pcie", 00:19:03.272 "traddr": "0000:00:06.0", 00:19:03.272 "name": "Nvme0" 00:19:03.272 }, 00:19:03.272 "method": "bdev_nvme_attach_controller" 00:19:03.272 }, 00:19:03.272 { 00:19:03.272 "method": "bdev_wait_for_examine" 00:19:03.272 } 00:19:03.272 ] 00:19:03.272 } 00:19:03.272 ] 00:19:03.272 } 00:19:03.272 [2024-07-11 21:32:23.976916] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:19:03.272 [2024-07-11 21:32:23.977032] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69923 ] 00:19:03.272 [2024-07-11 21:32:24.114937] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:03.272 [2024-07-11 21:32:24.210674] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:03.788  Copying: 1024/1024 [kB] (average 1000 MBps) 00:19:03.788 00:19:03.788 21:32:24 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:19:03.788 21:32:24 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:19:03.788 21:32:24 -- dd/basic_rw.sh@23 -- # count=7 00:19:03.788 21:32:24 -- dd/basic_rw.sh@24 -- # count=7 00:19:03.788 21:32:24 -- dd/basic_rw.sh@25 -- # size=57344 00:19:03.788 21:32:24 -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:19:03.788 21:32:24 -- dd/common.sh@98 -- # xtrace_disable 00:19:03.788 21:32:24 -- common/autotest_common.sh@10 -- # set +x 00:19:04.353 21:32:25 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:19:04.353 21:32:25 -- dd/basic_rw.sh@30 -- # gen_conf 00:19:04.353 21:32:25 -- dd/common.sh@31 -- # xtrace_disable 00:19:04.353 21:32:25 -- common/autotest_common.sh@10 -- # set +x 00:19:04.353 [2024-07-11 21:32:25.230418] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:19:04.353 [2024-07-11 21:32:25.230546] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69941 ] 00:19:04.353 { 00:19:04.353 "subsystems": [ 00:19:04.353 { 00:19:04.353 "subsystem": "bdev", 00:19:04.353 "config": [ 00:19:04.353 { 00:19:04.353 "params": { 00:19:04.353 "trtype": "pcie", 00:19:04.353 "traddr": "0000:00:06.0", 00:19:04.353 "name": "Nvme0" 00:19:04.353 }, 00:19:04.353 "method": "bdev_nvme_attach_controller" 00:19:04.353 }, 00:19:04.353 { 00:19:04.353 "method": "bdev_wait_for_examine" 00:19:04.353 } 00:19:04.353 ] 00:19:04.353 } 00:19:04.354 ] 00:19:04.354 } 00:19:04.612 [2024-07-11 21:32:25.371030] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:04.612 [2024-07-11 21:32:25.484444] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:05.128  Copying: 56/56 [kB] (average 54 MBps) 00:19:05.128 00:19:05.128 21:32:25 -- dd/basic_rw.sh@37 -- # gen_conf 00:19:05.128 21:32:25 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:19:05.128 21:32:25 -- dd/common.sh@31 -- # xtrace_disable 00:19:05.128 21:32:25 -- common/autotest_common.sh@10 -- # set +x 00:19:05.128 [2024-07-11 21:32:25.935145] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:19:05.128 [2024-07-11 21:32:25.935272] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69959 ] 00:19:05.128 { 00:19:05.128 "subsystems": [ 00:19:05.128 { 00:19:05.128 "subsystem": "bdev", 00:19:05.128 "config": [ 00:19:05.128 { 00:19:05.128 "params": { 00:19:05.128 "trtype": "pcie", 00:19:05.128 "traddr": "0000:00:06.0", 00:19:05.128 "name": "Nvme0" 00:19:05.128 }, 00:19:05.128 "method": "bdev_nvme_attach_controller" 00:19:05.128 }, 00:19:05.128 { 00:19:05.128 "method": "bdev_wait_for_examine" 00:19:05.128 } 00:19:05.128 ] 00:19:05.128 } 00:19:05.128 ] 00:19:05.128 } 00:19:05.128 [2024-07-11 21:32:26.077441] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:05.386 [2024-07-11 21:32:26.173941] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:05.644  Copying: 56/56 [kB] (average 54 MBps) 00:19:05.644 00:19:05.644 21:32:26 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:19:05.644 21:32:26 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:19:05.644 21:32:26 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:19:05.644 21:32:26 -- dd/common.sh@11 -- # local nvme_ref= 00:19:05.644 21:32:26 -- dd/common.sh@12 -- # local size=57344 00:19:05.644 21:32:26 -- dd/common.sh@14 -- # local bs=1048576 00:19:05.644 21:32:26 -- dd/common.sh@15 -- # local count=1 00:19:05.644 21:32:26 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:19:05.644 21:32:26 -- dd/common.sh@18 -- # gen_conf 00:19:05.644 21:32:26 -- dd/common.sh@31 -- # xtrace_disable 00:19:05.644 21:32:26 -- common/autotest_common.sh@10 -- # set +x 00:19:05.903 [2024-07-11 21:32:26.613893] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:19:05.903 [2024-07-11 21:32:26.614010] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69978 ] 00:19:05.903 { 00:19:05.903 "subsystems": [ 00:19:05.903 { 00:19:05.903 "subsystem": "bdev", 00:19:05.903 "config": [ 00:19:05.903 { 00:19:05.903 "params": { 00:19:05.903 "trtype": "pcie", 00:19:05.903 "traddr": "0000:00:06.0", 00:19:05.903 "name": "Nvme0" 00:19:05.903 }, 00:19:05.903 "method": "bdev_nvme_attach_controller" 00:19:05.903 }, 00:19:05.903 { 00:19:05.903 "method": "bdev_wait_for_examine" 00:19:05.903 } 00:19:05.903 ] 00:19:05.903 } 00:19:05.903 ] 00:19:05.903 } 00:19:05.903 [2024-07-11 21:32:26.756346] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:06.162 [2024-07-11 21:32:26.857451] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:06.421  Copying: 1024/1024 [kB] (average 1000 MBps) 00:19:06.421 00:19:06.421 21:32:27 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:19:06.421 21:32:27 -- dd/basic_rw.sh@23 -- # count=7 00:19:06.421 21:32:27 -- dd/basic_rw.sh@24 -- # count=7 00:19:06.421 21:32:27 -- dd/basic_rw.sh@25 -- # size=57344 00:19:06.421 21:32:27 -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:19:06.421 21:32:27 -- dd/common.sh@98 -- # xtrace_disable 00:19:06.421 21:32:27 -- common/autotest_common.sh@10 -- # set +x 00:19:06.988 21:32:27 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:19:06.988 21:32:27 -- dd/basic_rw.sh@30 -- # gen_conf 00:19:06.988 21:32:27 -- dd/common.sh@31 -- # xtrace_disable 00:19:06.988 21:32:27 -- common/autotest_common.sh@10 -- # set +x 00:19:06.988 { 00:19:06.988 "subsystems": [ 00:19:06.988 { 00:19:06.988 "subsystem": "bdev", 00:19:06.988 "config": [ 00:19:06.988 { 00:19:06.988 "params": { 00:19:06.988 "trtype": "pcie", 00:19:06.988 "traddr": "0000:00:06.0", 00:19:06.988 "name": "Nvme0" 00:19:06.988 }, 00:19:06.988 "method": "bdev_nvme_attach_controller" 00:19:06.988 }, 00:19:06.988 { 00:19:06.988 "method": "bdev_wait_for_examine" 00:19:06.988 } 00:19:06.988 ] 00:19:06.988 } 00:19:06.988 ] 00:19:06.988 } 00:19:06.988 [2024-07-11 21:32:27.921237] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:19:06.988 [2024-07-11 21:32:27.921379] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69996 ] 00:19:07.246 [2024-07-11 21:32:28.065736] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:07.246 [2024-07-11 21:32:28.178167] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:07.763  Copying: 56/56 [kB] (average 54 MBps) 00:19:07.763 00:19:07.763 21:32:28 -- dd/basic_rw.sh@37 -- # gen_conf 00:19:07.763 21:32:28 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:19:07.763 21:32:28 -- dd/common.sh@31 -- # xtrace_disable 00:19:07.763 21:32:28 -- common/autotest_common.sh@10 -- # set +x 00:19:07.763 [2024-07-11 21:32:28.649844] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:19:07.763 [2024-07-11 21:32:28.649966] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70014 ] 00:19:07.763 { 00:19:07.763 "subsystems": [ 00:19:07.763 { 00:19:07.763 "subsystem": "bdev", 00:19:07.763 "config": [ 00:19:07.763 { 00:19:07.763 "params": { 00:19:07.763 "trtype": "pcie", 00:19:07.763 "traddr": "0000:00:06.0", 00:19:07.763 "name": "Nvme0" 00:19:07.763 }, 00:19:07.763 "method": "bdev_nvme_attach_controller" 00:19:07.763 }, 00:19:07.763 { 00:19:07.763 "method": "bdev_wait_for_examine" 00:19:07.763 } 00:19:07.763 ] 00:19:07.763 } 00:19:07.763 ] 00:19:07.763 } 00:19:08.022 [2024-07-11 21:32:28.788974] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:08.022 [2024-07-11 21:32:28.883686] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:08.538  Copying: 56/56 [kB] (average 54 MBps) 00:19:08.538 00:19:08.538 21:32:29 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:19:08.538 21:32:29 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:19:08.538 21:32:29 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:19:08.538 21:32:29 -- dd/common.sh@11 -- # local nvme_ref= 00:19:08.538 21:32:29 -- dd/common.sh@12 -- # local size=57344 00:19:08.538 21:32:29 -- dd/common.sh@14 -- # local bs=1048576 00:19:08.538 21:32:29 -- dd/common.sh@15 -- # local count=1 00:19:08.538 21:32:29 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:19:08.538 21:32:29 -- dd/common.sh@18 -- # gen_conf 00:19:08.538 21:32:29 -- dd/common.sh@31 -- # xtrace_disable 00:19:08.538 21:32:29 -- common/autotest_common.sh@10 -- # set +x 00:19:08.538 [2024-07-11 21:32:29.301387] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:19:08.539 [2024-07-11 21:32:29.301498] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70022 ] 00:19:08.539 { 00:19:08.539 "subsystems": [ 00:19:08.539 { 00:19:08.539 "subsystem": "bdev", 00:19:08.539 "config": [ 00:19:08.539 { 00:19:08.539 "params": { 00:19:08.539 "trtype": "pcie", 00:19:08.539 "traddr": "0000:00:06.0", 00:19:08.539 "name": "Nvme0" 00:19:08.539 }, 00:19:08.539 "method": "bdev_nvme_attach_controller" 00:19:08.539 }, 00:19:08.539 { 00:19:08.539 "method": "bdev_wait_for_examine" 00:19:08.539 } 00:19:08.539 ] 00:19:08.539 } 00:19:08.539 ] 00:19:08.539 } 00:19:08.539 [2024-07-11 21:32:29.437365] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:08.797 [2024-07-11 21:32:29.529460] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:09.054  Copying: 1024/1024 [kB] (average 1000 MBps) 00:19:09.054 00:19:09.054 21:32:29 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:19:09.055 21:32:29 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:19:09.055 21:32:29 -- dd/basic_rw.sh@23 -- # count=3 00:19:09.055 21:32:29 -- dd/basic_rw.sh@24 -- # count=3 00:19:09.055 21:32:29 -- dd/basic_rw.sh@25 -- # size=49152 00:19:09.055 21:32:29 -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:19:09.055 21:32:29 -- dd/common.sh@98 -- # xtrace_disable 00:19:09.055 21:32:29 -- common/autotest_common.sh@10 -- # set +x 00:19:09.621 21:32:30 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:19:09.621 21:32:30 -- dd/basic_rw.sh@30 -- # gen_conf 00:19:09.621 21:32:30 -- dd/common.sh@31 -- # xtrace_disable 00:19:09.621 21:32:30 -- common/autotest_common.sh@10 -- # set +x 00:19:09.621 [2024-07-11 21:32:30.469100] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:19:09.621 [2024-07-11 21:32:30.469205] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70040 ] 00:19:09.621 { 00:19:09.621 "subsystems": [ 00:19:09.621 { 00:19:09.621 "subsystem": "bdev", 00:19:09.621 "config": [ 00:19:09.621 { 00:19:09.621 "params": { 00:19:09.621 "trtype": "pcie", 00:19:09.621 "traddr": "0000:00:06.0", 00:19:09.621 "name": "Nvme0" 00:19:09.621 }, 00:19:09.621 "method": "bdev_nvme_attach_controller" 00:19:09.621 }, 00:19:09.621 { 00:19:09.621 "method": "bdev_wait_for_examine" 00:19:09.621 } 00:19:09.621 ] 00:19:09.621 } 00:19:09.621 ] 00:19:09.621 } 00:19:09.879 [2024-07-11 21:32:30.608278] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:09.879 [2024-07-11 21:32:30.702122] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:10.137  Copying: 48/48 [kB] (average 46 MBps) 00:19:10.137 00:19:10.396 21:32:31 -- dd/basic_rw.sh@37 -- # gen_conf 00:19:10.396 21:32:31 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:19:10.396 21:32:31 -- dd/common.sh@31 -- # xtrace_disable 00:19:10.396 21:32:31 -- common/autotest_common.sh@10 -- # set +x 00:19:10.396 [2024-07-11 21:32:31.127898] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:19:10.396 [2024-07-11 21:32:31.127986] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70058 ] 00:19:10.396 { 00:19:10.396 "subsystems": [ 00:19:10.396 { 00:19:10.396 "subsystem": "bdev", 00:19:10.396 "config": [ 00:19:10.396 { 00:19:10.396 "params": { 00:19:10.396 "trtype": "pcie", 00:19:10.396 "traddr": "0000:00:06.0", 00:19:10.396 "name": "Nvme0" 00:19:10.396 }, 00:19:10.396 "method": "bdev_nvme_attach_controller" 00:19:10.396 }, 00:19:10.396 { 00:19:10.396 "method": "bdev_wait_for_examine" 00:19:10.396 } 00:19:10.396 ] 00:19:10.396 } 00:19:10.396 ] 00:19:10.396 } 00:19:10.396 [2024-07-11 21:32:31.260849] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:10.653 [2024-07-11 21:32:31.352118] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:10.912  Copying: 48/48 [kB] (average 46 MBps) 00:19:10.912 00:19:10.912 21:32:31 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:19:10.912 21:32:31 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:19:10.912 21:32:31 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:19:10.912 21:32:31 -- dd/common.sh@11 -- # local nvme_ref= 00:19:10.912 21:32:31 -- dd/common.sh@12 -- # local size=49152 00:19:10.912 21:32:31 -- dd/common.sh@14 -- # local bs=1048576 00:19:10.912 21:32:31 -- dd/common.sh@15 -- # local count=1 00:19:10.912 21:32:31 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:19:10.912 21:32:31 -- dd/common.sh@18 -- # gen_conf 00:19:10.912 21:32:31 -- dd/common.sh@31 -- # xtrace_disable 00:19:10.912 21:32:31 -- common/autotest_common.sh@10 -- # set +x 00:19:10.912 [2024-07-11 21:32:31.801680] Starting SPDK v24.01.1-pre git sha1 
4b94202c6 / DPDK 23.11.0 initialization... 00:19:10.912 [2024-07-11 21:32:31.801779] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70077 ] 00:19:10.912 { 00:19:10.912 "subsystems": [ 00:19:10.912 { 00:19:10.912 "subsystem": "bdev", 00:19:10.912 "config": [ 00:19:10.912 { 00:19:10.912 "params": { 00:19:10.912 "trtype": "pcie", 00:19:10.912 "traddr": "0000:00:06.0", 00:19:10.912 "name": "Nvme0" 00:19:10.912 }, 00:19:10.912 "method": "bdev_nvme_attach_controller" 00:19:10.912 }, 00:19:10.912 { 00:19:10.912 "method": "bdev_wait_for_examine" 00:19:10.912 } 00:19:10.912 ] 00:19:10.912 } 00:19:10.912 ] 00:19:10.912 } 00:19:11.170 [2024-07-11 21:32:31.941676] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:11.170 [2024-07-11 21:32:32.033354] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:11.688  Copying: 1024/1024 [kB] (average 1000 MBps) 00:19:11.688 00:19:11.688 21:32:32 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:19:11.688 21:32:32 -- dd/basic_rw.sh@23 -- # count=3 00:19:11.688 21:32:32 -- dd/basic_rw.sh@24 -- # count=3 00:19:11.688 21:32:32 -- dd/basic_rw.sh@25 -- # size=49152 00:19:11.688 21:32:32 -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:19:11.688 21:32:32 -- dd/common.sh@98 -- # xtrace_disable 00:19:11.688 21:32:32 -- common/autotest_common.sh@10 -- # set +x 00:19:12.264 21:32:32 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:19:12.264 21:32:32 -- dd/basic_rw.sh@30 -- # gen_conf 00:19:12.264 21:32:32 -- dd/common.sh@31 -- # xtrace_disable 00:19:12.264 21:32:32 -- common/autotest_common.sh@10 -- # set +x 00:19:12.264 [2024-07-11 21:32:32.999843] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:19:12.264 [2024-07-11 21:32:32.999999] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70095 ] 00:19:12.264 { 00:19:12.264 "subsystems": [ 00:19:12.264 { 00:19:12.264 "subsystem": "bdev", 00:19:12.264 "config": [ 00:19:12.264 { 00:19:12.264 "params": { 00:19:12.264 "trtype": "pcie", 00:19:12.264 "traddr": "0000:00:06.0", 00:19:12.265 "name": "Nvme0" 00:19:12.265 }, 00:19:12.265 "method": "bdev_nvme_attach_controller" 00:19:12.265 }, 00:19:12.265 { 00:19:12.265 "method": "bdev_wait_for_examine" 00:19:12.265 } 00:19:12.265 ] 00:19:12.265 } 00:19:12.265 ] 00:19:12.265 } 00:19:12.265 [2024-07-11 21:32:33.147168] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:12.524 [2024-07-11 21:32:33.249924] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:12.781  Copying: 48/48 [kB] (average 46 MBps) 00:19:12.781 00:19:12.781 21:32:33 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:19:12.781 21:32:33 -- dd/basic_rw.sh@37 -- # gen_conf 00:19:12.781 21:32:33 -- dd/common.sh@31 -- # xtrace_disable 00:19:12.781 21:32:33 -- common/autotest_common.sh@10 -- # set +x 00:19:12.781 { 00:19:12.781 "subsystems": [ 00:19:12.781 { 00:19:12.781 "subsystem": "bdev", 00:19:12.781 "config": [ 00:19:12.781 { 00:19:12.781 "params": { 00:19:12.781 "trtype": "pcie", 00:19:12.781 "traddr": "0000:00:06.0", 00:19:12.781 "name": "Nvme0" 00:19:12.781 }, 00:19:12.781 "method": "bdev_nvme_attach_controller" 00:19:12.781 }, 00:19:12.781 { 00:19:12.781 "method": "bdev_wait_for_examine" 00:19:12.781 } 00:19:12.781 ] 00:19:12.781 } 00:19:12.781 ] 00:19:12.781 } 00:19:12.781 [2024-07-11 21:32:33.722786] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:19:12.781 [2024-07-11 21:32:33.722914] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70108 ] 00:19:13.037 [2024-07-11 21:32:33.868249] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:13.037 [2024-07-11 21:32:33.967227] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:13.552  Copying: 48/48 [kB] (average 46 MBps) 00:19:13.552 00:19:13.552 21:32:34 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:19:13.552 21:32:34 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:19:13.552 21:32:34 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:19:13.552 21:32:34 -- dd/common.sh@11 -- # local nvme_ref= 00:19:13.552 21:32:34 -- dd/common.sh@12 -- # local size=49152 00:19:13.552 21:32:34 -- dd/common.sh@14 -- # local bs=1048576 00:19:13.552 21:32:34 -- dd/common.sh@15 -- # local count=1 00:19:13.552 21:32:34 -- dd/common.sh@18 -- # gen_conf 00:19:13.552 21:32:34 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:19:13.552 21:32:34 -- dd/common.sh@31 -- # xtrace_disable 00:19:13.552 21:32:34 -- common/autotest_common.sh@10 -- # set +x 00:19:13.552 { 00:19:13.552 "subsystems": [ 00:19:13.552 { 00:19:13.552 "subsystem": "bdev", 00:19:13.552 "config": [ 00:19:13.552 { 00:19:13.552 "params": { 00:19:13.552 "trtype": "pcie", 00:19:13.552 "traddr": "0000:00:06.0", 00:19:13.552 "name": "Nvme0" 00:19:13.552 }, 00:19:13.552 "method": "bdev_nvme_attach_controller" 00:19:13.552 }, 00:19:13.552 { 00:19:13.552 "method": "bdev_wait_for_examine" 00:19:13.552 } 00:19:13.552 ] 00:19:13.552 } 00:19:13.552 ] 00:19:13.552 } 00:19:13.552 [2024-07-11 21:32:34.415826] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
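
The check above is a plain diff -q between the file that was written to the bdev and the file read back, and clear_nvme then zeroes the first MiB of Nvme0n1 (a 1048576-byte, count=1 copy from /dev/zero) before the next sub-test. A hedged equivalent of that verify-and-reset step, reusing the $BDEV_CONF file from the earlier sketch:

DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
DUMP0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
DUMP1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
diff -q "$DUMP0" "$DUMP1"          # non-zero exit (test failure) if the read-back differs
# clear_nvme equivalent: overwrite the first 1 MiB of the bdev with zeroes.
"$DD" --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json "$BDEV_CONF"
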
00:19:13.552 [2024-07-11 21:32:34.416309] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70121 ] 00:19:13.810 [2024-07-11 21:32:34.559533] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:13.810 [2024-07-11 21:32:34.665542] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:14.326  Copying: 1024/1024 [kB] (average 1000 MBps) 00:19:14.326 00:19:14.326 00:19:14.326 real 0m15.783s 00:19:14.326 user 0m11.464s 00:19:14.326 sys 0m3.164s 00:19:14.326 21:32:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:14.326 21:32:35 -- common/autotest_common.sh@10 -- # set +x 00:19:14.326 ************************************ 00:19:14.326 END TEST dd_rw 00:19:14.326 ************************************ 00:19:14.326 21:32:35 -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:19:14.326 21:32:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:19:14.326 21:32:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:14.326 21:32:35 -- common/autotest_common.sh@10 -- # set +x 00:19:14.326 ************************************ 00:19:14.326 START TEST dd_rw_offset 00:19:14.326 ************************************ 00:19:14.326 21:32:35 -- common/autotest_common.sh@1104 -- # basic_offset 00:19:14.326 21:32:35 -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:19:14.326 21:32:35 -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:19:14.326 21:32:35 -- dd/common.sh@98 -- # xtrace_disable 00:19:14.326 21:32:35 -- common/autotest_common.sh@10 -- # set +x 00:19:14.326 21:32:35 -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:19:14.326 21:32:35 -- dd/basic_rw.sh@56 -- # 
data=6t95v89cihv7p1kk66fe1evuarzyzocge24e8ux9edxmau4v4z6j24cemu63ptw403u2bkpzhac4nx41qi21wh078pgwr5vukpt2brco58am92wur7pw1l13q9o2832o6sock5ihn9caerhfvop5jnws4z4my84y88ldr5g0pu40az8ttmq17buk9uygsechz398qhbpr0tt0chtw3wfsrriu1nr2neg3p8avf3s9gdle4n557cg2t2yywyxzj4n1kf5zc9g46ej6269cwoadg33fmhoorrszd6wcqfij2dw1pny3evy0b1jnk88rrsxts9jqecthxq8vbvhzkn8qe85yyjg1f68spzfmcxfhw1sh71w2v98b9xq302jri1vneed6ephlflvs6orsq0s67m2jok8ddifzb8xhygt531s5u9ln5ownz13wi2fqalhmw9d0xw843fjhzfzipabr7012q353eq69msgki9a0u92kg5dlelw4gltu28iu5c0xynzsztaspb0keul7onqzjn4iufi1k1gaqsaqzwv2tqlny076651ea5ukfl8n8y0yunr3ja5p86w9ukojrru4rk235tsctxxizipznvk2ts7bejmzbkjjmjirlrzrh7r1s41hk4sam1ra5tlbfc1qq5ljv4pcl5vikp9h94a2kw2eawad7km5sxu4mlgjktib67vmox3x9blzlw9o30onck0pu8bwpvz8z89gdcthbsgo4q5xaacbode3e7y101r4z35ta6enp743tnx1hpuwehp2vrpjy2b364471uw9curkrmh06goes5ihx38gofcsp896vc544ahbe8lyveub60w58e6nljaebol232u3mdxr1szcqbi6nztunj6ase4h1vsqp2c3uqdajtlghxsnezesd5z9pp2m56kfhz6ujitzo9lrclv7k670yskbjs8iz89rteujrlqkrwgzwygwp4wwtajdg99qhxwro7omueypo4bglxit186tmvvrorjoofdctv48p6lqtorp6hwh2aszk3hphhpz5rek6mo3ed9tlmqk63zdpralkyd1w4op87spruiepg6wuz04mvy2oq89veuxzzsa32x79kw1arz8k6huobnimmpz17wfex7jwo5q4mrsda5s5vxvcipzc6yizybza7ozwog40s6lyc98obx9vfv7gy5tta8od5dxwlygjsejh433ec9vaywsanwxqylyppd9pzhojhba3xfgkyphqent1ia9z280esyd0uofwrwrt2cwaayi6pqe4hnyet61cewjx5ib2me65yeqnblykx0s7namvcdsj0g6gcc0dug6gb9t728hekaetjyxvw67za2wnhtyy3vfhlpw8bwmqjq1fx3n27v0ew68ivjweeav1onkaka6i0g3mr7afcpp30r2354w1u59ppklxkp9kfx4kcs9iji09qyus9d8wztkqk6cdd97omm1uuy2vy85xnqzgvj8h37gs7d2q0nn35wxc7sgvix1vx9d5jsxmyz82ysq1lv7e1a4b1j32jubb5uv7y5aqsm568pqdvq929myvnxxd3y439visfvatg8ll0azffm2w8tqjat0f0rz6abkbi5bkf9xxezomvkyjubqc9hspq5sppvyn0dw83uauy31dvn6iir5fgrozye95nt10fsfpc37dk2pdiudef06xjag7q3hjk2ua54xlhxxxsqneh2nb7qy3zcu2v610ul7v2dso1blibta8o099cavxuzzhq6fsz2b27x3z0b3o26g52f9be7zlygxcdod266gsoeoqjo0ld1e4qq9arqo3yxs0q2lu4sqb62zad0m7hrnpcdrx4lte877y4erlkr7r8zf4klyroqexnx3sl8wybrfnuattidpgcajt2x52htddjqrcq2yn4pgwggbozyg95yxius815io70f9sqihy8f5gtlaznnehohtldv4o524nsr8trx1kxcvwopn4qyppjfuy2h31rr4t0eorvzvte29xrzzd1ggot6gl62ut21wx2khn9dnmfkbhcp3w8butxd66hymw8hbwslapn3750tow8jcn1fep8t7qjtbyjw9ueo8adsxffyxj6vy5hhvfg2oma67h81x4lvvc5qx1ev7cam208ixey35c72jv7whv5g2jtimmc2on53mco8dvlwjegs9tm02k1yz4ij7zr4qreq9koclmals3fwsok98367iuwyyd23xfp4wg1q0o11i7dt2yu2vpi6oimig3xlx9d8lpdc6meux15i8l4tdshaxb8tp8ya6nh85f3llx2ae5px1b1ymrejmy0ukby1tiionqr2ztjif2q3gjiye5qun0380mn5jswhq8oabtdr8dq14c37t5ln59az4oilppp5ycyzdogdo377wephhp6ietvikc223h7o2p0bptunckq71pxiztfoyi48ixvlaufe3f5vnncqdkq5v6jxchj0gwizwuwqq2ka721uhz89soyfospuuv07jaacbfx204nhd0rj8070olyxu4n3vcecimxr5kfjubrvvojhga35t2c56ic7a67uzdr57cmajezc84eevvab7d6kubrevmi9e11fs168xywctgvgr6nma1wy91i7hla197e91md914dift6ztfhlqbvx7xk05sid69wikxecrogkcandyvpph86x8d2gg8xbq9glesrwqdbwgm6xmwucnr8uuuuoa7gu52cuc6dbcrjmipip7h3v43wsv8w3naz1y77kev4fyeiarbw1b10xa957k0zth168esp1jma931y7svhfsd2k20r44h15f7txbe4h1o2oezyh3y9a3x97xvsps8bymm5n7588li0o6qnnsxr4apgtyk8hnq1j5d5pnmaa2da7ayf35to81ddlrtpm1g213hp7xki9gt8yw18zu5uyps724ljz7wmk0ow9a5681tbz5f1wki8q1yf5yeowa5ghr76q3675sdfzle6svow12uble9m3f77jes5xc0nbfuqlmzj2xgyzb64n051feacgwwmqfxwvqsnfa76wqinngy29kmfuk31gnd2cqqnhn9vkrhxjgx27oqahykzme3cr5oyzxmyk7af9kqz6cepsxrentlpuudjq740c9nggnqkica7kxblc7om2kty55ancad09wxlm9u2zxpqj85vcpgsol1qp3bw1jan7arjyeddetq9o68dyzfmqf5zjqhfya1xn5frw02ja0epqkh4vwnes1myznu7ad9bu4qk1adoc8g5c1kxcn510l5ssa8xp86p5ymkmesf8hu8s0vyvsfq58yje0kjyzizv1ffn5dl7u6sc3mot0fufq96l2qrqczlm1mlx5c6n41o7qxlf3fpryc29ivg8k6u8rk50t05nz3h2cjdo0jkk2raohvhn2g2qdd8w63pc1o0dbk2f0eh7u3qoa5xs2t7nfdlotvxjxs8f2ho0dntdh1zhuzyuwuoka9zsorsfxvsrk0
n5uaaxnylwax1xgxtctau1yn0107vvm8z6v44z4xas1yv8krlrgnfy8ltsae9zy9zt209kdtp23ti8yheh5spcz4lpx00wix8el0pv2i1vfycxr0pirjmegfyf19r7rwsyf0wosr8nbjo9x4mrk2ocx95gfobwcfd2l1tpe2rucjg1thpqycnd446ei2lwd2xzdg3loe5grhlyy86so0z49sjyiv65ig6q94aa02yieiuc8ak12s4v7p3yxlt1aib5yv3i6ue5o0qz85rr52sravw1s7auletql5m0s1wir44jcuvqu8lymvgbg2f4ho9q7f04qdjywltutq4vx6dgrwijvavqm3jjxj52o8y7snrv8syfnpijeksfshsp4nz31lhbpx14ii89151ueretfbivd9eo9wqt6otvo9rn7g7rswwdqfz1rkpqpa6dh73d7h52w0bazkslwztkdoa09f71fqja9ls83ofq80v39rjt9aiq57fkiezy5pvgbbqmw3jy8jzmaexpc2z4np5sh97d3ifsh023 00:19:14.326 21:32:35 -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:19:14.326 21:32:35 -- dd/basic_rw.sh@59 -- # gen_conf 00:19:14.326 21:32:35 -- dd/common.sh@31 -- # xtrace_disable 00:19:14.326 21:32:35 -- common/autotest_common.sh@10 -- # set +x 00:19:14.326 { 00:19:14.326 "subsystems": [ 00:19:14.326 { 00:19:14.326 "subsystem": "bdev", 00:19:14.326 "config": [ 00:19:14.326 { 00:19:14.326 "params": { 00:19:14.326 "trtype": "pcie", 00:19:14.326 "traddr": "0000:00:06.0", 00:19:14.326 "name": "Nvme0" 00:19:14.326 }, 00:19:14.326 "method": "bdev_nvme_attach_controller" 00:19:14.326 }, 00:19:14.326 { 00:19:14.326 "method": "bdev_wait_for_examine" 00:19:14.326 } 00:19:14.326 ] 00:19:14.326 } 00:19:14.326 ] 00:19:14.326 } 00:19:14.326 [2024-07-11 21:32:35.238888] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:19:14.326 [2024-07-11 21:32:35.239308] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70156 ] 00:19:14.584 [2024-07-11 21:32:35.386290] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:14.584 [2024-07-11 21:32:35.491653] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:15.099  Copying: 4096/4096 [B] (average 4000 kBps) 00:19:15.099 00:19:15.099 21:32:35 -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:19:15.099 21:32:35 -- dd/basic_rw.sh@65 -- # gen_conf 00:19:15.099 21:32:35 -- dd/common.sh@31 -- # xtrace_disable 00:19:15.099 21:32:35 -- common/autotest_common.sh@10 -- # set +x 00:19:15.099 [2024-07-11 21:32:35.953176] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
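
dd_rw_offset writes the 4096-byte pattern generated above one block into the bdev with --seek=1, then reads the same block back with --skip=1 --count=1 (the read whose output continues just below) and compares the first 4096 bytes of dd.dump1 against the pattern with read -rn4096. A sketch of that round trip; dd.dump0 is assumed to already hold the generated pattern in $data, and $BDEV_CONF is the config file from the earlier sketch:

DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
DUMP0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
DUMP1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
"$DD" --if="$DUMP0" --ob=Nvme0n1 --seek=1 --json "$BDEV_CONF"            # write one block at offset 1
"$DD" --ib=Nvme0n1 --of="$DUMP1" --skip=1 --count=1 --json "$BDEV_CONF"  # read that block back
read -rn4096 data_check < "$DUMP1"     # first 4096 bytes of the read-back file
[[ $data_check == "$data" ]]           # must match the generated pattern
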
00:19:15.099 [2024-07-11 21:32:35.953296] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70174 ] 00:19:15.099 { 00:19:15.099 "subsystems": [ 00:19:15.099 { 00:19:15.099 "subsystem": "bdev", 00:19:15.099 "config": [ 00:19:15.099 { 00:19:15.099 "params": { 00:19:15.099 "trtype": "pcie", 00:19:15.099 "traddr": "0000:00:06.0", 00:19:15.099 "name": "Nvme0" 00:19:15.099 }, 00:19:15.099 "method": "bdev_nvme_attach_controller" 00:19:15.099 }, 00:19:15.099 { 00:19:15.099 "method": "bdev_wait_for_examine" 00:19:15.099 } 00:19:15.099 ] 00:19:15.099 } 00:19:15.099 ] 00:19:15.099 } 00:19:15.358 [2024-07-11 21:32:36.092944] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:15.358 [2024-07-11 21:32:36.199045] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:15.874  Copying: 4096/4096 [B] (average 4000 kBps) 00:19:15.874 00:19:15.874 21:32:36 -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:19:15.874 ************************************ 00:19:15.874 END TEST dd_rw_offset 00:19:15.875 21:32:36 -- dd/basic_rw.sh@72 -- # [[ 6t95v89cihv7p1kk66fe1evuarzyzocge24e8ux9edxmau4v4z6j24cemu63ptw403u2bkpzhac4nx41qi21wh078pgwr5vukpt2brco58am92wur7pw1l13q9o2832o6sock5ihn9caerhfvop5jnws4z4my84y88ldr5g0pu40az8ttmq17buk9uygsechz398qhbpr0tt0chtw3wfsrriu1nr2neg3p8avf3s9gdle4n557cg2t2yywyxzj4n1kf5zc9g46ej6269cwoadg33fmhoorrszd6wcqfij2dw1pny3evy0b1jnk88rrsxts9jqecthxq8vbvhzkn8qe85yyjg1f68spzfmcxfhw1sh71w2v98b9xq302jri1vneed6ephlflvs6orsq0s67m2jok8ddifzb8xhygt531s5u9ln5ownz13wi2fqalhmw9d0xw843fjhzfzipabr7012q353eq69msgki9a0u92kg5dlelw4gltu28iu5c0xynzsztaspb0keul7onqzjn4iufi1k1gaqsaqzwv2tqlny076651ea5ukfl8n8y0yunr3ja5p86w9ukojrru4rk235tsctxxizipznvk2ts7bejmzbkjjmjirlrzrh7r1s41hk4sam1ra5tlbfc1qq5ljv4pcl5vikp9h94a2kw2eawad7km5sxu4mlgjktib67vmox3x9blzlw9o30onck0pu8bwpvz8z89gdcthbsgo4q5xaacbode3e7y101r4z35ta6enp743tnx1hpuwehp2vrpjy2b364471uw9curkrmh06goes5ihx38gofcsp896vc544ahbe8lyveub60w58e6nljaebol232u3mdxr1szcqbi6nztunj6ase4h1vsqp2c3uqdajtlghxsnezesd5z9pp2m56kfhz6ujitzo9lrclv7k670yskbjs8iz89rteujrlqkrwgzwygwp4wwtajdg99qhxwro7omueypo4bglxit186tmvvrorjoofdctv48p6lqtorp6hwh2aszk3hphhpz5rek6mo3ed9tlmqk63zdpralkyd1w4op87spruiepg6wuz04mvy2oq89veuxzzsa32x79kw1arz8k6huobnimmpz17wfex7jwo5q4mrsda5s5vxvcipzc6yizybza7ozwog40s6lyc98obx9vfv7gy5tta8od5dxwlygjsejh433ec9vaywsanwxqylyppd9pzhojhba3xfgkyphqent1ia9z280esyd0uofwrwrt2cwaayi6pqe4hnyet61cewjx5ib2me65yeqnblykx0s7namvcdsj0g6gcc0dug6gb9t728hekaetjyxvw67za2wnhtyy3vfhlpw8bwmqjq1fx3n27v0ew68ivjweeav1onkaka6i0g3mr7afcpp30r2354w1u59ppklxkp9kfx4kcs9iji09qyus9d8wztkqk6cdd97omm1uuy2vy85xnqzgvj8h37gs7d2q0nn35wxc7sgvix1vx9d5jsxmyz82ysq1lv7e1a4b1j32jubb5uv7y5aqsm568pqdvq929myvnxxd3y439visfvatg8ll0azffm2w8tqjat0f0rz6abkbi5bkf9xxezomvkyjubqc9hspq5sppvyn0dw83uauy31dvn6iir5fgrozye95nt10fsfpc37dk2pdiudef06xjag7q3hjk2ua54xlhxxxsqneh2nb7qy3zcu2v610ul7v2dso1blibta8o099cavxuzzhq6fsz2b27x3z0b3o26g52f9be7zlygxcdod266gsoeoqjo0ld1e4qq9arqo3yxs0q2lu4sqb62zad0m7hrnpcdrx4lte877y4erlkr7r8zf4klyroqexnx3sl8wybrfnuattidpgcajt2x52htddjqrcq2yn4pgwggbozyg95yxius815io70f9sqihy8f5gtlaznnehohtldv4o524nsr8trx1kxcvwopn4qyppjfuy2h31rr4t0eorvzvte29xrzzd1ggot6gl62ut21wx2khn9dnmfkbhcp3w8butxd66hymw8hbwslapn3750tow8jcn1fep8t7qjtbyjw9ueo8adsxffyxj6vy5hhvfg2oma67h81x4lvvc5qx1ev7cam208ixey35c72jv7whv5g2jtimmc2on53mco8dvlwjegs9tm02k1yz4ij7zr4qreq9koclmals3fwsok98367iuwyyd23xfp4wg1q0o11i7dt2yu2vpi6oimig3xlx9d8l
pdc6meux15i8l4tdshaxb8tp8ya6nh85f3llx2ae5px1b1ymrejmy0ukby1tiionqr2ztjif2q3gjiye5qun0380mn5jswhq8oabtdr8dq14c37t5ln59az4oilppp5ycyzdogdo377wephhp6ietvikc223h7o2p0bptunckq71pxiztfoyi48ixvlaufe3f5vnncqdkq5v6jxchj0gwizwuwqq2ka721uhz89soyfospuuv07jaacbfx204nhd0rj8070olyxu4n3vcecimxr5kfjubrvvojhga35t2c56ic7a67uzdr57cmajezc84eevvab7d6kubrevmi9e11fs168xywctgvgr6nma1wy91i7hla197e91md914dift6ztfhlqbvx7xk05sid69wikxecrogkcandyvpph86x8d2gg8xbq9glesrwqdbwgm6xmwucnr8uuuuoa7gu52cuc6dbcrjmipip7h3v43wsv8w3naz1y77kev4fyeiarbw1b10xa957k0zth168esp1jma931y7svhfsd2k20r44h15f7txbe4h1o2oezyh3y9a3x97xvsps8bymm5n7588li0o6qnnsxr4apgtyk8hnq1j5d5pnmaa2da7ayf35to81ddlrtpm1g213hp7xki9gt8yw18zu5uyps724ljz7wmk0ow9a5681tbz5f1wki8q1yf5yeowa5ghr76q3675sdfzle6svow12uble9m3f77jes5xc0nbfuqlmzj2xgyzb64n051feacgwwmqfxwvqsnfa76wqinngy29kmfuk31gnd2cqqnhn9vkrhxjgx27oqahykzme3cr5oyzxmyk7af9kqz6cepsxrentlpuudjq740c9nggnqkica7kxblc7om2kty55ancad09wxlm9u2zxpqj85vcpgsol1qp3bw1jan7arjyeddetq9o68dyzfmqf5zjqhfya1xn5frw02ja0epqkh4vwnes1myznu7ad9bu4qk1adoc8g5c1kxcn510l5ssa8xp86p5ymkmesf8hu8s0vyvsfq58yje0kjyzizv1ffn5dl7u6sc3mot0fufq96l2qrqczlm1mlx5c6n41o7qxlf3fpryc29ivg8k6u8rk50t05nz3h2cjdo0jkk2raohvhn2g2qdd8w63pc1o0dbk2f0eh7u3qoa5xs2t7nfdlotvxjxs8f2ho0dntdh1zhuzyuwuoka9zsorsfxvsrk0n5uaaxnylwax1xgxtctau1yn0107vvm8z6v44z4xas1yv8krlrgnfy8ltsae9zy9zt209kdtp23ti8yheh5spcz4lpx00wix8el0pv2i1vfycxr0pirjmegfyf19r7rwsyf0wosr8nbjo9x4mrk2ocx95gfobwcfd2l1tpe2rucjg1thpqycnd446ei2lwd2xzdg3loe5grhlyy86so0z49sjyiv65ig6q94aa02yieiuc8ak12s4v7p3yxlt1aib5yv3i6ue5o0qz85rr52sravw1s7auletql5m0s1wir44jcuvqu8lymvgbg2f4ho9q7f04qdjywltutq4vx6dgrwijvavqm3jjxj52o8y7snrv8syfnpijeksfshsp4nz31lhbpx14ii89151ueretfbivd9eo9wqt6otvo9rn7g7rswwdqfz1rkpqpa6dh73d7h52w0bazkslwztkdoa09f71fqja9ls83ofq80v39rjt9aiq57fkiezy5pvgbbqmw3jy8jzmaexpc2z4np5sh97d3ifsh023 == 
\6\t\9\5\v\8\9\c\i\h\v\7\p\1\k\k\6\6\f\e\1\e\v\u\a\r\z\y\z\o\c\g\e\2\4\e\8\u\x\9\e\d\x\m\a\u\4\v\4\z\6\j\2\4\c\e\m\u\6\3\p\t\w\4\0\3\u\2\b\k\p\z\h\a\c\4\n\x\4\1\q\i\2\1\w\h\0\7\8\p\g\w\r\5\v\u\k\p\t\2\b\r\c\o\5\8\a\m\9\2\w\u\r\7\p\w\1\l\1\3\q\9\o\2\8\3\2\o\6\s\o\c\k\5\i\h\n\9\c\a\e\r\h\f\v\o\p\5\j\n\w\s\4\z\4\m\y\8\4\y\8\8\l\d\r\5\g\0\p\u\4\0\a\z\8\t\t\m\q\1\7\b\u\k\9\u\y\g\s\e\c\h\z\3\9\8\q\h\b\p\r\0\t\t\0\c\h\t\w\3\w\f\s\r\r\i\u\1\n\r\2\n\e\g\3\p\8\a\v\f\3\s\9\g\d\l\e\4\n\5\5\7\c\g\2\t\2\y\y\w\y\x\z\j\4\n\1\k\f\5\z\c\9\g\4\6\e\j\6\2\6\9\c\w\o\a\d\g\3\3\f\m\h\o\o\r\r\s\z\d\6\w\c\q\f\i\j\2\d\w\1\p\n\y\3\e\v\y\0\b\1\j\n\k\8\8\r\r\s\x\t\s\9\j\q\e\c\t\h\x\q\8\v\b\v\h\z\k\n\8\q\e\8\5\y\y\j\g\1\f\6\8\s\p\z\f\m\c\x\f\h\w\1\s\h\7\1\w\2\v\9\8\b\9\x\q\3\0\2\j\r\i\1\v\n\e\e\d\6\e\p\h\l\f\l\v\s\6\o\r\s\q\0\s\6\7\m\2\j\o\k\8\d\d\i\f\z\b\8\x\h\y\g\t\5\3\1\s\5\u\9\l\n\5\o\w\n\z\1\3\w\i\2\f\q\a\l\h\m\w\9\d\0\x\w\8\4\3\f\j\h\z\f\z\i\p\a\b\r\7\0\1\2\q\3\5\3\e\q\6\9\m\s\g\k\i\9\a\0\u\9\2\k\g\5\d\l\e\l\w\4\g\l\t\u\2\8\i\u\5\c\0\x\y\n\z\s\z\t\a\s\p\b\0\k\e\u\l\7\o\n\q\z\j\n\4\i\u\f\i\1\k\1\g\a\q\s\a\q\z\w\v\2\t\q\l\n\y\0\7\6\6\5\1\e\a\5\u\k\f\l\8\n\8\y\0\y\u\n\r\3\j\a\5\p\8\6\w\9\u\k\o\j\r\r\u\4\r\k\2\3\5\t\s\c\t\x\x\i\z\i\p\z\n\v\k\2\t\s\7\b\e\j\m\z\b\k\j\j\m\j\i\r\l\r\z\r\h\7\r\1\s\4\1\h\k\4\s\a\m\1\r\a\5\t\l\b\f\c\1\q\q\5\l\j\v\4\p\c\l\5\v\i\k\p\9\h\9\4\a\2\k\w\2\e\a\w\a\d\7\k\m\5\s\x\u\4\m\l\g\j\k\t\i\b\6\7\v\m\o\x\3\x\9\b\l\z\l\w\9\o\3\0\o\n\c\k\0\p\u\8\b\w\p\v\z\8\z\8\9\g\d\c\t\h\b\s\g\o\4\q\5\x\a\a\c\b\o\d\e\3\e\7\y\1\0\1\r\4\z\3\5\t\a\6\e\n\p\7\4\3\t\n\x\1\h\p\u\w\e\h\p\2\v\r\p\j\y\2\b\3\6\4\4\7\1\u\w\9\c\u\r\k\r\m\h\0\6\g\o\e\s\5\i\h\x\3\8\g\o\f\c\s\p\8\9\6\v\c\5\4\4\a\h\b\e\8\l\y\v\e\u\b\6\0\w\5\8\e\6\n\l\j\a\e\b\o\l\2\3\2\u\3\m\d\x\r\1\s\z\c\q\b\i\6\n\z\t\u\n\j\6\a\s\e\4\h\1\v\s\q\p\2\c\3\u\q\d\a\j\t\l\g\h\x\s\n\e\z\e\s\d\5\z\9\p\p\2\m\5\6\k\f\h\z\6\u\j\i\t\z\o\9\l\r\c\l\v\7\k\6\7\0\y\s\k\b\j\s\8\i\z\8\9\r\t\e\u\j\r\l\q\k\r\w\g\z\w\y\g\w\p\4\w\w\t\a\j\d\g\9\9\q\h\x\w\r\o\7\o\m\u\e\y\p\o\4\b\g\l\x\i\t\1\8\6\t\m\v\v\r\o\r\j\o\o\f\d\c\t\v\4\8\p\6\l\q\t\o\r\p\6\h\w\h\2\a\s\z\k\3\h\p\h\h\p\z\5\r\e\k\6\m\o\3\e\d\9\t\l\m\q\k\6\3\z\d\p\r\a\l\k\y\d\1\w\4\o\p\8\7\s\p\r\u\i\e\p\g\6\w\u\z\0\4\m\v\y\2\o\q\8\9\v\e\u\x\z\z\s\a\3\2\x\7\9\k\w\1\a\r\z\8\k\6\h\u\o\b\n\i\m\m\p\z\1\7\w\f\e\x\7\j\w\o\5\q\4\m\r\s\d\a\5\s\5\v\x\v\c\i\p\z\c\6\y\i\z\y\b\z\a\7\o\z\w\o\g\4\0\s\6\l\y\c\9\8\o\b\x\9\v\f\v\7\g\y\5\t\t\a\8\o\d\5\d\x\w\l\y\g\j\s\e\j\h\4\3\3\e\c\9\v\a\y\w\s\a\n\w\x\q\y\l\y\p\p\d\9\p\z\h\o\j\h\b\a\3\x\f\g\k\y\p\h\q\e\n\t\1\i\a\9\z\2\8\0\e\s\y\d\0\u\o\f\w\r\w\r\t\2\c\w\a\a\y\i\6\p\q\e\4\h\n\y\e\t\6\1\c\e\w\j\x\5\i\b\2\m\e\6\5\y\e\q\n\b\l\y\k\x\0\s\7\n\a\m\v\c\d\s\j\0\g\6\g\c\c\0\d\u\g\6\g\b\9\t\7\2\8\h\e\k\a\e\t\j\y\x\v\w\6\7\z\a\2\w\n\h\t\y\y\3\v\f\h\l\p\w\8\b\w\m\q\j\q\1\f\x\3\n\2\7\v\0\e\w\6\8\i\v\j\w\e\e\a\v\1\o\n\k\a\k\a\6\i\0\g\3\m\r\7\a\f\c\p\p\3\0\r\2\3\5\4\w\1\u\5\9\p\p\k\l\x\k\p\9\k\f\x\4\k\c\s\9\i\j\i\0\9\q\y\u\s\9\d\8\w\z\t\k\q\k\6\c\d\d\9\7\o\m\m\1\u\u\y\2\v\y\8\5\x\n\q\z\g\v\j\8\h\3\7\g\s\7\d\2\q\0\n\n\3\5\w\x\c\7\s\g\v\i\x\1\v\x\9\d\5\j\s\x\m\y\z\8\2\y\s\q\1\l\v\7\e\1\a\4\b\1\j\3\2\j\u\b\b\5\u\v\7\y\5\a\q\s\m\5\6\8\p\q\d\v\q\9\2\9\m\y\v\n\x\x\d\3\y\4\3\9\v\i\s\f\v\a\t\g\8\l\l\0\a\z\f\f\m\2\w\8\t\q\j\a\t\0\f\0\r\z\6\a\b\k\b\i\5\b\k\f\9\x\x\e\z\o\m\v\k\y\j\u\b\q\c\9\h\s\p\q\5\s\p\p\v\y\n\0\d\w\8\3\u\a\u\y\3\1\d\v\n\6\i\i\r\5\f\g\r\o\z\y\e\9\5\n\t\1\0\f\s\f\p\c\3\7\d\k\2\p\d\i\u\d\e\f\0\6\x\j\a\g\7\q\3\h\j\k\2\u\a\5\4\x\l\h\x\x\x\s\q\n\e\h\2\n\b\7\q\y\3\z\c\u\2\v\6\1\0\u\l\7\v\2\d\s\o\1\b\l\i\b\t\a\8\o\0\9\
9\c\a\v\x\u\z\z\h\q\6\f\s\z\2\b\2\7\x\3\z\0\b\3\o\2\6\g\5\2\f\9\b\e\7\z\l\y\g\x\c\d\o\d\2\6\6\g\s\o\e\o\q\j\o\0\l\d\1\e\4\q\q\9\a\r\q\o\3\y\x\s\0\q\2\l\u\4\s\q\b\6\2\z\a\d\0\m\7\h\r\n\p\c\d\r\x\4\l\t\e\8\7\7\y\4\e\r\l\k\r\7\r\8\z\f\4\k\l\y\r\o\q\e\x\n\x\3\s\l\8\w\y\b\r\f\n\u\a\t\t\i\d\p\g\c\a\j\t\2\x\5\2\h\t\d\d\j\q\r\c\q\2\y\n\4\p\g\w\g\g\b\o\z\y\g\9\5\y\x\i\u\s\8\1\5\i\o\7\0\f\9\s\q\i\h\y\8\f\5\g\t\l\a\z\n\n\e\h\o\h\t\l\d\v\4\o\5\2\4\n\s\r\8\t\r\x\1\k\x\c\v\w\o\p\n\4\q\y\p\p\j\f\u\y\2\h\3\1\r\r\4\t\0\e\o\r\v\z\v\t\e\2\9\x\r\z\z\d\1\g\g\o\t\6\g\l\6\2\u\t\2\1\w\x\2\k\h\n\9\d\n\m\f\k\b\h\c\p\3\w\8\b\u\t\x\d\6\6\h\y\m\w\8\h\b\w\s\l\a\p\n\3\7\5\0\t\o\w\8\j\c\n\1\f\e\p\8\t\7\q\j\t\b\y\j\w\9\u\e\o\8\a\d\s\x\f\f\y\x\j\6\v\y\5\h\h\v\f\g\2\o\m\a\6\7\h\8\1\x\4\l\v\v\c\5\q\x\1\e\v\7\c\a\m\2\0\8\i\x\e\y\3\5\c\7\2\j\v\7\w\h\v\5\g\2\j\t\i\m\m\c\2\o\n\5\3\m\c\o\8\d\v\l\w\j\e\g\s\9\t\m\0\2\k\1\y\z\4\i\j\7\z\r\4\q\r\e\q\9\k\o\c\l\m\a\l\s\3\f\w\s\o\k\9\8\3\6\7\i\u\w\y\y\d\2\3\x\f\p\4\w\g\1\q\0\o\1\1\i\7\d\t\2\y\u\2\v\p\i\6\o\i\m\i\g\3\x\l\x\9\d\8\l\p\d\c\6\m\e\u\x\1\5\i\8\l\4\t\d\s\h\a\x\b\8\t\p\8\y\a\6\n\h\8\5\f\3\l\l\x\2\a\e\5\p\x\1\b\1\y\m\r\e\j\m\y\0\u\k\b\y\1\t\i\i\o\n\q\r\2\z\t\j\i\f\2\q\3\g\j\i\y\e\5\q\u\n\0\3\8\0\m\n\5\j\s\w\h\q\8\o\a\b\t\d\r\8\d\q\1\4\c\3\7\t\5\l\n\5\9\a\z\4\o\i\l\p\p\p\5\y\c\y\z\d\o\g\d\o\3\7\7\w\e\p\h\h\p\6\i\e\t\v\i\k\c\2\2\3\h\7\o\2\p\0\b\p\t\u\n\c\k\q\7\1\p\x\i\z\t\f\o\y\i\4\8\i\x\v\l\a\u\f\e\3\f\5\v\n\n\c\q\d\k\q\5\v\6\j\x\c\h\j\0\g\w\i\z\w\u\w\q\q\2\k\a\7\2\1\u\h\z\8\9\s\o\y\f\o\s\p\u\u\v\0\7\j\a\a\c\b\f\x\2\0\4\n\h\d\0\r\j\8\0\7\0\o\l\y\x\u\4\n\3\v\c\e\c\i\m\x\r\5\k\f\j\u\b\r\v\v\o\j\h\g\a\3\5\t\2\c\5\6\i\c\7\a\6\7\u\z\d\r\5\7\c\m\a\j\e\z\c\8\4\e\e\v\v\a\b\7\d\6\k\u\b\r\e\v\m\i\9\e\1\1\f\s\1\6\8\x\y\w\c\t\g\v\g\r\6\n\m\a\1\w\y\9\1\i\7\h\l\a\1\9\7\e\9\1\m\d\9\1\4\d\i\f\t\6\z\t\f\h\l\q\b\v\x\7\x\k\0\5\s\i\d\6\9\w\i\k\x\e\c\r\o\g\k\c\a\n\d\y\v\p\p\h\8\6\x\8\d\2\g\g\8\x\b\q\9\g\l\e\s\r\w\q\d\b\w\g\m\6\x\m\w\u\c\n\r\8\u\u\u\u\o\a\7\g\u\5\2\c\u\c\6\d\b\c\r\j\m\i\p\i\p\7\h\3\v\4\3\w\s\v\8\w\3\n\a\z\1\y\7\7\k\e\v\4\f\y\e\i\a\r\b\w\1\b\1\0\x\a\9\5\7\k\0\z\t\h\1\6\8\e\s\p\1\j\m\a\9\3\1\y\7\s\v\h\f\s\d\2\k\2\0\r\4\4\h\1\5\f\7\t\x\b\e\4\h\1\o\2\o\e\z\y\h\3\y\9\a\3\x\9\7\x\v\s\p\s\8\b\y\m\m\5\n\7\5\8\8\l\i\0\o\6\q\n\n\s\x\r\4\a\p\g\t\y\k\8\h\n\q\1\j\5\d\5\p\n\m\a\a\2\d\a\7\a\y\f\3\5\t\o\8\1\d\d\l\r\t\p\m\1\g\2\1\3\h\p\7\x\k\i\9\g\t\8\y\w\1\8\z\u\5\u\y\p\s\7\2\4\l\j\z\7\w\m\k\0\o\w\9\a\5\6\8\1\t\b\z\5\f\1\w\k\i\8\q\1\y\f\5\y\e\o\w\a\5\g\h\r\7\6\q\3\6\7\5\s\d\f\z\l\e\6\s\v\o\w\1\2\u\b\l\e\9\m\3\f\7\7\j\e\s\5\x\c\0\n\b\f\u\q\l\m\z\j\2\x\g\y\z\b\6\4\n\0\5\1\f\e\a\c\g\w\w\m\q\f\x\w\v\q\s\n\f\a\7\6\w\q\i\n\n\g\y\2\9\k\m\f\u\k\3\1\g\n\d\2\c\q\q\n\h\n\9\v\k\r\h\x\j\g\x\2\7\o\q\a\h\y\k\z\m\e\3\c\r\5\o\y\z\x\m\y\k\7\a\f\9\k\q\z\6\c\e\p\s\x\r\e\n\t\l\p\u\u\d\j\q\7\4\0\c\9\n\g\g\n\q\k\i\c\a\7\k\x\b\l\c\7\o\m\2\k\t\y\5\5\a\n\c\a\d\0\9\w\x\l\m\9\u\2\z\x\p\q\j\8\5\v\c\p\g\s\o\l\1\q\p\3\b\w\1\j\a\n\7\a\r\j\y\e\d\d\e\t\q\9\o\6\8\d\y\z\f\m\q\f\5\z\j\q\h\f\y\a\1\x\n\5\f\r\w\0\2\j\a\0\e\p\q\k\h\4\v\w\n\e\s\1\m\y\z\n\u\7\a\d\9\b\u\4\q\k\1\a\d\o\c\8\g\5\c\1\k\x\c\n\5\1\0\l\5\s\s\a\8\x\p\8\6\p\5\y\m\k\m\e\s\f\8\h\u\8\s\0\v\y\v\s\f\q\5\8\y\j\e\0\k\j\y\z\i\z\v\1\f\f\n\5\d\l\7\u\6\s\c\3\m\o\t\0\f\u\f\q\9\6\l\2\q\r\q\c\z\l\m\1\m\l\x\5\c\6\n\4\1\o\7\q\x\l\f\3\f\p\r\y\c\2\9\i\v\g\8\k\6\u\8\r\k\5\0\t\0\5\n\z\3\h\2\c\j\d\o\0\j\k\k\2\r\a\o\h\v\h\n\2\g\2\q\d\d\8\w\6\3\p\c\1\o\0\d\b\k\2\f\0\e\h\7\u\3\q\o\a\5\x\s\2\t\7\n\f\d\l\o\t\v\x\j\x\s\8\f\2\h\o\0\d\n\t\d\h\1\z\h\u\z\y\u\w\u\o\k\a\9\z\s\o\r\s\f\x\v\s\r\k\0\n\5\u\a\a
\x\n\y\l\w\a\x\1\x\g\x\t\c\t\a\u\1\y\n\0\1\0\7\v\v\m\8\z\6\v\4\4\z\4\x\a\s\1\y\v\8\k\r\l\r\g\n\f\y\8\l\t\s\a\e\9\z\y\9\z\t\2\0\9\k\d\t\p\2\3\t\i\8\y\h\e\h\5\s\p\c\z\4\l\p\x\0\0\w\i\x\8\e\l\0\p\v\2\i\1\v\f\y\c\x\r\0\p\i\r\j\m\e\g\f\y\f\1\9\r\7\r\w\s\y\f\0\w\o\s\r\8\n\b\j\o\9\x\4\m\r\k\2\o\c\x\9\5\g\f\o\b\w\c\f\d\2\l\1\t\p\e\2\r\u\c\j\g\1\t\h\p\q\y\c\n\d\4\4\6\e\i\2\l\w\d\2\x\z\d\g\3\l\o\e\5\g\r\h\l\y\y\8\6\s\o\0\z\4\9\s\j\y\i\v\6\5\i\g\6\q\9\4\a\a\0\2\y\i\e\i\u\c\8\a\k\1\2\s\4\v\7\p\3\y\x\l\t\1\a\i\b\5\y\v\3\i\6\u\e\5\o\0\q\z\8\5\r\r\5\2\s\r\a\v\w\1\s\7\a\u\l\e\t\q\l\5\m\0\s\1\w\i\r\4\4\j\c\u\v\q\u\8\l\y\m\v\g\b\g\2\f\4\h\o\9\q\7\f\0\4\q\d\j\y\w\l\t\u\t\q\4\v\x\6\d\g\r\w\i\j\v\a\v\q\m\3\j\j\x\j\5\2\o\8\y\7\s\n\r\v\8\s\y\f\n\p\i\j\e\k\s\f\s\h\s\p\4\n\z\3\1\l\h\b\p\x\1\4\i\i\8\9\1\5\1\u\e\r\e\t\f\b\i\v\d\9\e\o\9\w\q\t\6\o\t\v\o\9\r\n\7\g\7\r\s\w\w\d\q\f\z\1\r\k\p\q\p\a\6\d\h\7\3\d\7\h\5\2\w\0\b\a\z\k\s\l\w\z\t\k\d\o\a\0\9\f\7\1\f\q\j\a\9\l\s\8\3\o\f\q\8\0\v\3\9\r\j\t\9\a\i\q\5\7\f\k\i\e\z\y\5\p\v\g\b\b\q\m\w\3\j\y\8\j\z\m\a\e\x\p\c\2\z\4\n\p\5\s\h\9\7\d\3\i\f\s\h\0\2\3 ]] 00:19:15.875 00:19:15.875 real 0m1.461s 00:19:15.875 user 0m0.982s 00:19:15.875 sys 0m0.349s 00:19:15.875 21:32:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:15.875 21:32:36 -- common/autotest_common.sh@10 -- # set +x 00:19:15.875 ************************************ 00:19:15.875 21:32:36 -- dd/basic_rw.sh@1 -- # cleanup 00:19:15.875 21:32:36 -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:19:15.875 21:32:36 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:19:15.875 21:32:36 -- dd/common.sh@11 -- # local nvme_ref= 00:19:15.875 21:32:36 -- dd/common.sh@12 -- # local size=0xffff 00:19:15.875 21:32:36 -- dd/common.sh@14 -- # local bs=1048576 00:19:15.875 21:32:36 -- dd/common.sh@15 -- # local count=1 00:19:15.875 21:32:36 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:19:15.875 21:32:36 -- dd/common.sh@18 -- # gen_conf 00:19:15.875 21:32:36 -- dd/common.sh@31 -- # xtrace_disable 00:19:15.875 21:32:36 -- common/autotest_common.sh@10 -- # set +x 00:19:15.875 [2024-07-11 21:32:36.675178] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:19:15.875 [2024-07-11 21:32:36.675892] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70202 ] 00:19:15.875 { 00:19:15.875 "subsystems": [ 00:19:15.875 { 00:19:15.875 "subsystem": "bdev", 00:19:15.875 "config": [ 00:19:15.875 { 00:19:15.875 "params": { 00:19:15.875 "trtype": "pcie", 00:19:15.875 "traddr": "0000:00:06.0", 00:19:15.875 "name": "Nvme0" 00:19:15.875 }, 00:19:15.875 "method": "bdev_nvme_attach_controller" 00:19:15.875 }, 00:19:15.875 { 00:19:15.875 "method": "bdev_wait_for_examine" 00:19:15.875 } 00:19:15.875 ] 00:19:15.875 } 00:19:15.875 ] 00:19:15.875 } 00:19:15.875 [2024-07-11 21:32:36.812280] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:16.134 [2024-07-11 21:32:36.911647] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:16.392  Copying: 1024/1024 [kB] (average 500 MBps) 00:19:16.392 00:19:16.392 21:32:37 -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:19:16.392 00:19:16.392 real 0m19.024s 00:19:16.392 user 0m13.494s 00:19:16.392 sys 0m4.018s 00:19:16.392 21:32:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:16.392 21:32:37 -- common/autotest_common.sh@10 -- # set +x 00:19:16.392 ************************************ 00:19:16.392 END TEST spdk_dd_basic_rw 00:19:16.392 ************************************ 00:19:16.392 21:32:37 -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:19:16.392 21:32:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:19:16.392 21:32:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:16.392 21:32:37 -- common/autotest_common.sh@10 -- # set +x 00:19:16.650 ************************************ 00:19:16.650 START TEST spdk_dd_posix 00:19:16.650 ************************************ 00:19:16.650 21:32:37 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:19:16.650 * Looking for test storage... 
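
The posix suite that starts here runs each flag case against the two dump files; its first case, dd_flag_append (traced just below), writes one 32-byte string to each file and then copies dump0 onto dump1 with --oflag=append, so dump1 must end up holding its own string followed by dump0's, which is what the [[ ... == ... ]] check below asserts. A sketch of that case, using the two strings the harness happened to generate in this run:

DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
DUMP0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
DUMP1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
printf %s jdshuz4qsajzo1ubb69m08hswbpa2tmj > "$DUMP0"
printf %s 81vqt4s884cpz9j0gxx7xbek09g71aoe > "$DUMP1"
"$DD" --if="$DUMP0" --of="$DUMP1" --oflag=append
# dump1 must now be its original 32 bytes with dump0's 32 bytes appended.
[[ $(cat "$DUMP1") == 81vqt4s884cpz9j0gxx7xbek09g71aoejdshuz4qsajzo1ubb69m08hswbpa2tmj ]]
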
00:19:16.650 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:19:16.650 21:32:37 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:16.650 21:32:37 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:16.650 21:32:37 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:16.650 21:32:37 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:16.650 21:32:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:16.650 21:32:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:16.650 21:32:37 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:16.650 21:32:37 -- paths/export.sh@5 -- # export PATH 00:19:16.650 21:32:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:16.650 21:32:37 -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:19:16.650 21:32:37 -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:19:16.650 21:32:37 -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:19:16.650 21:32:37 -- dd/posix.sh@125 -- # trap cleanup EXIT 00:19:16.650 21:32:37 -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:19:16.650 21:32:37 -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:19:16.650 21:32:37 -- dd/posix.sh@130 -- # tests 00:19:16.650 21:32:37 -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:19:16.650 * First test run, liburing in use 00:19:16.650 21:32:37 -- dd/posix.sh@102 -- # run_test 
dd_flag_append append 00:19:16.650 21:32:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:19:16.650 21:32:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:16.650 21:32:37 -- common/autotest_common.sh@10 -- # set +x 00:19:16.650 ************************************ 00:19:16.650 START TEST dd_flag_append 00:19:16.650 ************************************ 00:19:16.650 21:32:37 -- common/autotest_common.sh@1104 -- # append 00:19:16.650 21:32:37 -- dd/posix.sh@16 -- # local dump0 00:19:16.650 21:32:37 -- dd/posix.sh@17 -- # local dump1 00:19:16.650 21:32:37 -- dd/posix.sh@19 -- # gen_bytes 32 00:19:16.650 21:32:37 -- dd/common.sh@98 -- # xtrace_disable 00:19:16.650 21:32:37 -- common/autotest_common.sh@10 -- # set +x 00:19:16.650 21:32:37 -- dd/posix.sh@19 -- # dump0=jdshuz4qsajzo1ubb69m08hswbpa2tmj 00:19:16.650 21:32:37 -- dd/posix.sh@20 -- # gen_bytes 32 00:19:16.650 21:32:37 -- dd/common.sh@98 -- # xtrace_disable 00:19:16.650 21:32:37 -- common/autotest_common.sh@10 -- # set +x 00:19:16.650 21:32:37 -- dd/posix.sh@20 -- # dump1=81vqt4s884cpz9j0gxx7xbek09g71aoe 00:19:16.650 21:32:37 -- dd/posix.sh@22 -- # printf %s jdshuz4qsajzo1ubb69m08hswbpa2tmj 00:19:16.650 21:32:37 -- dd/posix.sh@23 -- # printf %s 81vqt4s884cpz9j0gxx7xbek09g71aoe 00:19:16.650 21:32:37 -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:19:16.650 [2024-07-11 21:32:37.489710] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:19:16.650 [2024-07-11 21:32:37.489827] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70258 ] 00:19:16.908 [2024-07-11 21:32:37.622303] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:16.908 [2024-07-11 21:32:37.731510] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:17.165  Copying: 32/32 [B] (average 31 kBps) 00:19:17.165 00:19:17.165 21:32:38 -- dd/posix.sh@27 -- # [[ 81vqt4s884cpz9j0gxx7xbek09g71aoejdshuz4qsajzo1ubb69m08hswbpa2tmj == \8\1\v\q\t\4\s\8\8\4\c\p\z\9\j\0\g\x\x\7\x\b\e\k\0\9\g\7\1\a\o\e\j\d\s\h\u\z\4\q\s\a\j\z\o\1\u\b\b\6\9\m\0\8\h\s\w\b\p\a\2\t\m\j ]] 00:19:17.165 00:19:17.165 real 0m0.631s 00:19:17.165 user 0m0.357s 00:19:17.165 sys 0m0.150s 00:19:17.165 21:32:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:17.165 21:32:38 -- common/autotest_common.sh@10 -- # set +x 00:19:17.165 ************************************ 00:19:17.165 END TEST dd_flag_append 00:19:17.165 ************************************ 00:19:17.165 21:32:38 -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:19:17.165 21:32:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:19:17.165 21:32:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:17.165 21:32:38 -- common/autotest_common.sh@10 -- # set +x 00:19:17.423 ************************************ 00:19:17.423 START TEST dd_flag_directory 00:19:17.423 ************************************ 00:19:17.423 21:32:38 -- common/autotest_common.sh@1104 -- # directory 00:19:17.423 21:32:38 -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:19:17.423 21:32:38 -- 
common/autotest_common.sh@640 -- # local es=0 00:19:17.423 21:32:38 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:19:17.423 21:32:38 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:17.423 21:32:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:17.423 21:32:38 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:17.423 21:32:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:17.423 21:32:38 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:17.423 21:32:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:17.423 21:32:38 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:17.423 21:32:38 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:19:17.423 21:32:38 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:19:17.423 [2024-07-11 21:32:38.162716] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:19:17.423 [2024-07-11 21:32:38.162822] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70290 ] 00:19:17.423 [2024-07-11 21:32:38.298684] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:17.681 [2024-07-11 21:32:38.398168] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:17.681 [2024-07-11 21:32:38.488849] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:19:17.681 [2024-07-11 21:32:38.488918] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:19:17.681 [2024-07-11 21:32:38.488934] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:19:17.681 [2024-07-11 21:32:38.604341] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:19:17.939 21:32:38 -- common/autotest_common.sh@643 -- # es=236 00:19:17.939 21:32:38 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:19:17.939 21:32:38 -- common/autotest_common.sh@652 -- # es=108 00:19:17.939 21:32:38 -- common/autotest_common.sh@653 -- # case "$es" in 00:19:17.939 21:32:38 -- common/autotest_common.sh@660 -- # es=1 00:19:17.939 21:32:38 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:19:17.939 21:32:38 -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:19:17.939 21:32:38 -- common/autotest_common.sh@640 -- # local es=0 00:19:17.939 21:32:38 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:19:17.939 21:32:38 -- common/autotest_common.sh@628 -- # local 
arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:17.939 21:32:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:17.939 21:32:38 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:17.939 21:32:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:17.939 21:32:38 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:17.939 21:32:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:17.939 21:32:38 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:17.939 21:32:38 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:19:17.940 21:32:38 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:19:17.940 [2024-07-11 21:32:38.763746] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:19:17.940 [2024-07-11 21:32:38.763902] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70300 ] 00:19:18.198 [2024-07-11 21:32:38.907669] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:18.198 [2024-07-11 21:32:39.011341] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:18.198 [2024-07-11 21:32:39.105104] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:19:18.198 [2024-07-11 21:32:39.105163] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:19:18.198 [2024-07-11 21:32:39.105178] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:19:18.456 [2024-07-11 21:32:39.219918] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:19:18.456 21:32:39 -- common/autotest_common.sh@643 -- # es=236 00:19:18.456 21:32:39 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:19:18.456 21:32:39 -- common/autotest_common.sh@652 -- # es=108 00:19:18.456 21:32:39 -- common/autotest_common.sh@653 -- # case "$es" in 00:19:18.456 21:32:39 -- common/autotest_common.sh@660 -- # es=1 00:19:18.456 21:32:39 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:19:18.456 00:19:18.456 real 0m1.197s 00:19:18.456 user 0m0.677s 00:19:18.456 sys 0m0.308s 00:19:18.456 21:32:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:18.456 21:32:39 -- common/autotest_common.sh@10 -- # set +x 00:19:18.456 ************************************ 00:19:18.456 END TEST dd_flag_directory 00:19:18.456 ************************************ 00:19:18.456 21:32:39 -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:19:18.456 21:32:39 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:19:18.456 21:32:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:18.456 21:32:39 -- common/autotest_common.sh@10 -- # set +x 00:19:18.456 ************************************ 00:19:18.456 START TEST dd_flag_nofollow 00:19:18.456 ************************************ 00:19:18.456 21:32:39 -- common/autotest_common.sh@1104 -- # nofollow 00:19:18.456 21:32:39 -- dd/posix.sh@36 -- # local 
test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:19:18.456 21:32:39 -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:19:18.456 21:32:39 -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:19:18.456 21:32:39 -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:19:18.456 21:32:39 -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:19:18.456 21:32:39 -- common/autotest_common.sh@640 -- # local es=0 00:19:18.456 21:32:39 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:19:18.456 21:32:39 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:18.456 21:32:39 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:18.456 21:32:39 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:18.456 21:32:39 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:18.456 21:32:39 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:18.456 21:32:39 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:18.456 21:32:39 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:18.456 21:32:39 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:19:18.456 21:32:39 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:19:18.714 [2024-07-11 21:32:39.423381] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
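
The dd_flag_directory case above is purely a failure check: opening a regular dump file with --iflag=directory, and then with --oflag=directory, must make spdk_dd exit non-zero with "Not a directory"; the NOT wrapper folds the raw exit status (es=236) down to 1, so the test passes exactly when the copy fails. A minimal version of that negative check:

DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
DUMP0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
# Both directions must fail, because dd.dump0 is a regular file, not a directory.
! "$DD" --if="$DUMP0" --iflag=directory --of="$DUMP0"
! "$DD" --if="$DUMP0" --of="$DUMP0" --oflag=directory
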
00:19:18.714 [2024-07-11 21:32:39.423525] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70328 ] 00:19:18.714 [2024-07-11 21:32:39.565195] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:18.973 [2024-07-11 21:32:39.666924] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:18.973 [2024-07-11 21:32:39.754356] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:19:18.973 [2024-07-11 21:32:39.754421] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:19:18.973 [2024-07-11 21:32:39.754437] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:19:18.973 [2024-07-11 21:32:39.867663] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:19:19.253 21:32:39 -- common/autotest_common.sh@643 -- # es=216 00:19:19.253 21:32:39 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:19:19.253 21:32:39 -- common/autotest_common.sh@652 -- # es=88 00:19:19.253 21:32:39 -- common/autotest_common.sh@653 -- # case "$es" in 00:19:19.253 21:32:39 -- common/autotest_common.sh@660 -- # es=1 00:19:19.253 21:32:39 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:19:19.253 21:32:39 -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:19:19.253 21:32:39 -- common/autotest_common.sh@640 -- # local es=0 00:19:19.253 21:32:39 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:19:19.253 21:32:39 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:19.253 21:32:39 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:19.253 21:32:39 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:19.253 21:32:39 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:19.253 21:32:39 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:19.253 21:32:39 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:19.253 21:32:39 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:19.253 21:32:39 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:19:19.253 21:32:39 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:19:19.253 [2024-07-11 21:32:40.010532] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:19:19.253 [2024-07-11 21:32:40.010669] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70339 ] 00:19:19.253 [2024-07-11 21:32:40.152227] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:19.535 [2024-07-11 21:32:40.248570] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:19.535 [2024-07-11 21:32:40.334611] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:19:19.535 [2024-07-11 21:32:40.334675] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:19:19.535 [2024-07-11 21:32:40.334692] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:19:19.535 [2024-07-11 21:32:40.447942] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:19:19.792 21:32:40 -- common/autotest_common.sh@643 -- # es=216 00:19:19.792 21:32:40 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:19:19.792 21:32:40 -- common/autotest_common.sh@652 -- # es=88 00:19:19.792 21:32:40 -- common/autotest_common.sh@653 -- # case "$es" in 00:19:19.792 21:32:40 -- common/autotest_common.sh@660 -- # es=1 00:19:19.792 21:32:40 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:19:19.792 21:32:40 -- dd/posix.sh@46 -- # gen_bytes 512 00:19:19.792 21:32:40 -- dd/common.sh@98 -- # xtrace_disable 00:19:19.792 21:32:40 -- common/autotest_common.sh@10 -- # set +x 00:19:19.792 21:32:40 -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:19:19.792 [2024-07-11 21:32:40.588672] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
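
dd_flag_nofollow links dd.dump0.link and dd.dump1.link onto the dump files, expects both the --iflag=nofollow read and the --oflag=nofollow write through a link to fail with "Too many levels of symbolic links", and finally copies through dd.dump0.link without the flag, which is the 512-byte copy that succeeds just below. A sketch of those three steps with the same paths:

D=/home/vagrant/spdk_repo/spdk/test/dd
DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
ln -fs "$D/dd.dump0" "$D/dd.dump0.link"
ln -fs "$D/dd.dump1" "$D/dd.dump1.link"
! "$DD" --if="$D/dd.dump0.link" --iflag=nofollow --of="$D/dd.dump1"   # must fail: input is a symlink
! "$DD" --if="$D/dd.dump0" --of="$D/dd.dump1.link" --oflag=nofollow   # must fail: output is a symlink
"$DD" --if="$D/dd.dump0.link" --of="$D/dd.dump1"                      # plain copy through the link succeeds
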
00:19:19.792 [2024-07-11 21:32:40.588773] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70346 ] 00:19:19.792 [2024-07-11 21:32:40.722380] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:20.050 [2024-07-11 21:32:40.825951] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:20.308  Copying: 512/512 [B] (average 500 kBps) 00:19:20.308 00:19:20.308 21:32:41 -- dd/posix.sh@49 -- # [[ tvsw0lfspou6zdvyzfaza3xokmw6xwelwmjw1uy02faruab52rjblecd4kszfkmj0jd85ofwwrdr8ulpb537ndhfzs1ms8edev90wpmc222pnicdr6kb8c87yuakjvxkis0ks2gp1upy17pkmuqflxcdk7in1ky40lfxsn205sad85afk54la4en2t7bti2yne48ze9ga32vna5yoaijojxsatbj83m78vuwmhmfbue6tie27fq0gbzveaw1fd9ajjmfg7twap9v6dxotw1br5zw9xtzsrqpw45gi0c85vztqwhqljhp98rqtp6m5w9eb9ous0eyusswoumi1x20fzvo7kqoh97vlspcagxagvv5mecmvr7cqxzkxz178nngsyyrejhpqd4ijuy886k0mlwm3ib89ybzk5juouz1pbmq2ammxd9ve5x5zv14ymfodj1yzymbb8swnol8flm2b9q2phs7m218877x9yakypxivr9ndpksjemv1wziv724 == \t\v\s\w\0\l\f\s\p\o\u\6\z\d\v\y\z\f\a\z\a\3\x\o\k\m\w\6\x\w\e\l\w\m\j\w\1\u\y\0\2\f\a\r\u\a\b\5\2\r\j\b\l\e\c\d\4\k\s\z\f\k\m\j\0\j\d\8\5\o\f\w\w\r\d\r\8\u\l\p\b\5\3\7\n\d\h\f\z\s\1\m\s\8\e\d\e\v\9\0\w\p\m\c\2\2\2\p\n\i\c\d\r\6\k\b\8\c\8\7\y\u\a\k\j\v\x\k\i\s\0\k\s\2\g\p\1\u\p\y\1\7\p\k\m\u\q\f\l\x\c\d\k\7\i\n\1\k\y\4\0\l\f\x\s\n\2\0\5\s\a\d\8\5\a\f\k\5\4\l\a\4\e\n\2\t\7\b\t\i\2\y\n\e\4\8\z\e\9\g\a\3\2\v\n\a\5\y\o\a\i\j\o\j\x\s\a\t\b\j\8\3\m\7\8\v\u\w\m\h\m\f\b\u\e\6\t\i\e\2\7\f\q\0\g\b\z\v\e\a\w\1\f\d\9\a\j\j\m\f\g\7\t\w\a\p\9\v\6\d\x\o\t\w\1\b\r\5\z\w\9\x\t\z\s\r\q\p\w\4\5\g\i\0\c\8\5\v\z\t\q\w\h\q\l\j\h\p\9\8\r\q\t\p\6\m\5\w\9\e\b\9\o\u\s\0\e\y\u\s\s\w\o\u\m\i\1\x\2\0\f\z\v\o\7\k\q\o\h\9\7\v\l\s\p\c\a\g\x\a\g\v\v\5\m\e\c\m\v\r\7\c\q\x\z\k\x\z\1\7\8\n\n\g\s\y\y\r\e\j\h\p\q\d\4\i\j\u\y\8\8\6\k\0\m\l\w\m\3\i\b\8\9\y\b\z\k\5\j\u\o\u\z\1\p\b\m\q\2\a\m\m\x\d\9\v\e\5\x\5\z\v\1\4\y\m\f\o\d\j\1\y\z\y\m\b\b\8\s\w\n\o\l\8\f\l\m\2\b\9\q\2\p\h\s\7\m\2\1\8\8\7\7\x\9\y\a\k\y\p\x\i\v\r\9\n\d\p\k\s\j\e\m\v\1\w\z\i\v\7\2\4 ]] 00:19:20.308 ************************************ 00:19:20.308 END TEST dd_flag_nofollow 00:19:20.308 ************************************ 00:19:20.308 00:19:20.308 real 0m1.787s 00:19:20.308 user 0m1.018s 00:19:20.308 sys 0m0.436s 00:19:20.308 21:32:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:20.308 21:32:41 -- common/autotest_common.sh@10 -- # set +x 00:19:20.308 21:32:41 -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:19:20.308 21:32:41 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:19:20.308 21:32:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:20.308 21:32:41 -- common/autotest_common.sh@10 -- # set +x 00:19:20.308 ************************************ 00:19:20.308 START TEST dd_flag_noatime 00:19:20.308 ************************************ 00:19:20.308 21:32:41 -- common/autotest_common.sh@1104 -- # noatime 00:19:20.308 21:32:41 -- dd/posix.sh@53 -- # local atime_if 00:19:20.308 21:32:41 -- dd/posix.sh@54 -- # local atime_of 00:19:20.308 21:32:41 -- dd/posix.sh@58 -- # gen_bytes 512 00:19:20.308 21:32:41 -- dd/common.sh@98 -- # xtrace_disable 00:19:20.308 21:32:41 -- common/autotest_common.sh@10 -- # set +x 00:19:20.308 21:32:41 -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:19:20.308 21:32:41 -- dd/posix.sh@60 -- # atime_if=1720733560 00:19:20.308 21:32:41 -- 
dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:19:20.308 21:32:41 -- dd/posix.sh@61 -- # atime_of=1720733561 00:19:20.308 21:32:41 -- dd/posix.sh@66 -- # sleep 1 00:19:21.683 21:32:42 -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:19:21.683 [2024-07-11 21:32:42.281091] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:19:21.683 [2024-07-11 21:32:42.281251] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70387 ] 00:19:21.683 [2024-07-11 21:32:42.430666] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:21.683 [2024-07-11 21:32:42.529689] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:21.940  Copying: 512/512 [B] (average 500 kBps) 00:19:21.940 00:19:21.940 21:32:42 -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:19:21.940 21:32:42 -- dd/posix.sh@69 -- # (( atime_if == 1720733560 )) 00:19:21.940 21:32:42 -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:19:21.940 21:32:42 -- dd/posix.sh@70 -- # (( atime_of == 1720733561 )) 00:19:21.940 21:32:42 -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:19:22.200 [2024-07-11 21:32:42.913205] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:19:22.200 [2024-07-11 21:32:42.913365] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70404 ] 00:19:22.200 [2024-07-11 21:32:43.064076] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:22.458 [2024-07-11 21:32:43.164532] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:22.715  Copying: 512/512 [B] (average 500 kBps) 00:19:22.715 00:19:22.715 21:32:43 -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:19:22.715 21:32:43 -- dd/posix.sh@73 -- # (( atime_if < 1720733563 )) 00:19:22.715 00:19:22.715 real 0m2.298s 00:19:22.715 user 0m0.690s 00:19:22.715 sys 0m0.345s 00:19:22.715 ************************************ 00:19:22.715 END TEST dd_flag_noatime 00:19:22.715 ************************************ 00:19:22.715 21:32:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:22.715 21:32:43 -- common/autotest_common.sh@10 -- # set +x 00:19:22.715 21:32:43 -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:19:22.715 21:32:43 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:19:22.715 21:32:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:22.715 21:32:43 -- common/autotest_common.sh@10 -- # set +x 00:19:22.715 ************************************ 00:19:22.715 START TEST dd_flags_misc 00:19:22.715 ************************************ 00:19:22.715 21:32:43 -- common/autotest_common.sh@1104 -- # io 00:19:22.715 21:32:43 -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:19:22.715 21:32:43 -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 
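
The dd_flag_noatime case above records dd.dump0's access time with stat --printf=%X (the 1720733560 and 1720733561 epochs in the trace), sleeps one second, copies the file with --iflag=noatime and checks that the atime has not moved, then copies it again without the flag and checks that the atime has advanced. A sketch of the same check; whether the final plain read really bumps the atime depends on the filesystem's atime policy, which the sketch assumes is permissive:

DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
DUMP0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
DUMP1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
atime_before=$(stat --printf=%X "$DUMP0")
sleep 1
"$DD" --if="$DUMP0" --iflag=noatime --of="$DUMP1"
(( $(stat --printf=%X "$DUMP0") == atime_before ))   # noatime read must not touch the atime
"$DD" --if="$DUMP0" --of="$DUMP1"
(( $(stat --printf=%X "$DUMP0") > atime_before ))    # a normal read should advance it
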
00:19:22.715 21:32:43 -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:19:22.715 21:32:43 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:19:22.715 21:32:43 -- dd/posix.sh@86 -- # gen_bytes 512 00:19:22.715 21:32:43 -- dd/common.sh@98 -- # xtrace_disable 00:19:22.715 21:32:43 -- common/autotest_common.sh@10 -- # set +x 00:19:22.715 21:32:43 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:19:22.715 21:32:43 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:19:22.715 [2024-07-11 21:32:43.601545] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:19:22.715 [2024-07-11 21:32:43.601645] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70430 ] 00:19:22.975 [2024-07-11 21:32:43.733683] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:22.975 [2024-07-11 21:32:43.835222] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:23.233  Copying: 512/512 [B] (average 500 kBps) 00:19:23.233 00:19:23.233 21:32:44 -- dd/posix.sh@93 -- # [[ ld41ooke6gw0pun2cxsyxmq3j4z9s2wtixgx16jctwijtr7hefv591s5uwdmwvab5c5732zxinslphunsj3vi8uhwtrmvtnck5ctwyena0bnzjgop6qauztfvrcdqnk8vgh8lo9valz5eaef5pvnrsgo3i396ml8apgbliyfi2l4enwqoh2grahv3123vj4ye378g8hn4344uvbg8ghp3xwvunjjssnxca6h6sq6o9gmyc8qkz52wg1itmgo0sjhaykq5mx3jm52mxj6z5w3a01na7lodt12ptx60w0gkp396u3lm918nlltno0lta2hbx2i3neq2udl6mdbhu9ssyypu307qe44cy4u05azbd4e1qhju9rj9oefjvrxoya03yj4hy3mzwy7l1rq821jg2ne7mcwt76qp2c7axkjz75j248uol9bw59ne2y25fhelogt76119rsclqbjc54enmrx7jr96666rgyv8pfuzl77mahqhlebv7aj5v0d5jed == \l\d\4\1\o\o\k\e\6\g\w\0\p\u\n\2\c\x\s\y\x\m\q\3\j\4\z\9\s\2\w\t\i\x\g\x\1\6\j\c\t\w\i\j\t\r\7\h\e\f\v\5\9\1\s\5\u\w\d\m\w\v\a\b\5\c\5\7\3\2\z\x\i\n\s\l\p\h\u\n\s\j\3\v\i\8\u\h\w\t\r\m\v\t\n\c\k\5\c\t\w\y\e\n\a\0\b\n\z\j\g\o\p\6\q\a\u\z\t\f\v\r\c\d\q\n\k\8\v\g\h\8\l\o\9\v\a\l\z\5\e\a\e\f\5\p\v\n\r\s\g\o\3\i\3\9\6\m\l\8\a\p\g\b\l\i\y\f\i\2\l\4\e\n\w\q\o\h\2\g\r\a\h\v\3\1\2\3\v\j\4\y\e\3\7\8\g\8\h\n\4\3\4\4\u\v\b\g\8\g\h\p\3\x\w\v\u\n\j\j\s\s\n\x\c\a\6\h\6\s\q\6\o\9\g\m\y\c\8\q\k\z\5\2\w\g\1\i\t\m\g\o\0\s\j\h\a\y\k\q\5\m\x\3\j\m\5\2\m\x\j\6\z\5\w\3\a\0\1\n\a\7\l\o\d\t\1\2\p\t\x\6\0\w\0\g\k\p\3\9\6\u\3\l\m\9\1\8\n\l\l\t\n\o\0\l\t\a\2\h\b\x\2\i\3\n\e\q\2\u\d\l\6\m\d\b\h\u\9\s\s\y\y\p\u\3\0\7\q\e\4\4\c\y\4\u\0\5\a\z\b\d\4\e\1\q\h\j\u\9\r\j\9\o\e\f\j\v\r\x\o\y\a\0\3\y\j\4\h\y\3\m\z\w\y\7\l\1\r\q\8\2\1\j\g\2\n\e\7\m\c\w\t\7\6\q\p\2\c\7\a\x\k\j\z\7\5\j\2\4\8\u\o\l\9\b\w\5\9\n\e\2\y\2\5\f\h\e\l\o\g\t\7\6\1\1\9\r\s\c\l\q\b\j\c\5\4\e\n\m\r\x\7\j\r\9\6\6\6\6\r\g\y\v\8\p\f\u\z\l\7\7\m\a\h\q\h\l\e\b\v\7\a\j\5\v\0\d\5\j\e\d ]] 00:19:23.233 21:32:44 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:19:23.233 21:32:44 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:19:23.491 [2024-07-11 21:32:44.188932] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:19:23.491 [2024-07-11 21:32:44.189034] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70438 ] 00:19:23.491 [2024-07-11 21:32:44.322595] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:23.491 [2024-07-11 21:32:44.422452] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:24.005  Copying: 512/512 [B] (average 500 kBps) 00:19:24.005 00:19:24.005 21:32:44 -- dd/posix.sh@93 -- # [[ ld41ooke6gw0pun2cxsyxmq3j4z9s2wtixgx16jctwijtr7hefv591s5uwdmwvab5c5732zxinslphunsj3vi8uhwtrmvtnck5ctwyena0bnzjgop6qauztfvrcdqnk8vgh8lo9valz5eaef5pvnrsgo3i396ml8apgbliyfi2l4enwqoh2grahv3123vj4ye378g8hn4344uvbg8ghp3xwvunjjssnxca6h6sq6o9gmyc8qkz52wg1itmgo0sjhaykq5mx3jm52mxj6z5w3a01na7lodt12ptx60w0gkp396u3lm918nlltno0lta2hbx2i3neq2udl6mdbhu9ssyypu307qe44cy4u05azbd4e1qhju9rj9oefjvrxoya03yj4hy3mzwy7l1rq821jg2ne7mcwt76qp2c7axkjz75j248uol9bw59ne2y25fhelogt76119rsclqbjc54enmrx7jr96666rgyv8pfuzl77mahqhlebv7aj5v0d5jed == \l\d\4\1\o\o\k\e\6\g\w\0\p\u\n\2\c\x\s\y\x\m\q\3\j\4\z\9\s\2\w\t\i\x\g\x\1\6\j\c\t\w\i\j\t\r\7\h\e\f\v\5\9\1\s\5\u\w\d\m\w\v\a\b\5\c\5\7\3\2\z\x\i\n\s\l\p\h\u\n\s\j\3\v\i\8\u\h\w\t\r\m\v\t\n\c\k\5\c\t\w\y\e\n\a\0\b\n\z\j\g\o\p\6\q\a\u\z\t\f\v\r\c\d\q\n\k\8\v\g\h\8\l\o\9\v\a\l\z\5\e\a\e\f\5\p\v\n\r\s\g\o\3\i\3\9\6\m\l\8\a\p\g\b\l\i\y\f\i\2\l\4\e\n\w\q\o\h\2\g\r\a\h\v\3\1\2\3\v\j\4\y\e\3\7\8\g\8\h\n\4\3\4\4\u\v\b\g\8\g\h\p\3\x\w\v\u\n\j\j\s\s\n\x\c\a\6\h\6\s\q\6\o\9\g\m\y\c\8\q\k\z\5\2\w\g\1\i\t\m\g\o\0\s\j\h\a\y\k\q\5\m\x\3\j\m\5\2\m\x\j\6\z\5\w\3\a\0\1\n\a\7\l\o\d\t\1\2\p\t\x\6\0\w\0\g\k\p\3\9\6\u\3\l\m\9\1\8\n\l\l\t\n\o\0\l\t\a\2\h\b\x\2\i\3\n\e\q\2\u\d\l\6\m\d\b\h\u\9\s\s\y\y\p\u\3\0\7\q\e\4\4\c\y\4\u\0\5\a\z\b\d\4\e\1\q\h\j\u\9\r\j\9\o\e\f\j\v\r\x\o\y\a\0\3\y\j\4\h\y\3\m\z\w\y\7\l\1\r\q\8\2\1\j\g\2\n\e\7\m\c\w\t\7\6\q\p\2\c\7\a\x\k\j\z\7\5\j\2\4\8\u\o\l\9\b\w\5\9\n\e\2\y\2\5\f\h\e\l\o\g\t\7\6\1\1\9\r\s\c\l\q\b\j\c\5\4\e\n\m\r\x\7\j\r\9\6\6\6\6\r\g\y\v\8\p\f\u\z\l\7\7\m\a\h\q\h\l\e\b\v\7\a\j\5\v\0\d\5\j\e\d ]] 00:19:24.005 21:32:44 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:19:24.005 21:32:44 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:19:24.005 [2024-07-11 21:32:44.776387] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:19:24.005 [2024-07-11 21:32:44.776514] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70450 ] 00:19:24.005 [2024-07-11 21:32:44.914574] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:24.262 [2024-07-11 21:32:45.018860] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:24.519  Copying: 512/512 [B] (average 166 kBps) 00:19:24.519 00:19:24.519 21:32:45 -- dd/posix.sh@93 -- # [[ ld41ooke6gw0pun2cxsyxmq3j4z9s2wtixgx16jctwijtr7hefv591s5uwdmwvab5c5732zxinslphunsj3vi8uhwtrmvtnck5ctwyena0bnzjgop6qauztfvrcdqnk8vgh8lo9valz5eaef5pvnrsgo3i396ml8apgbliyfi2l4enwqoh2grahv3123vj4ye378g8hn4344uvbg8ghp3xwvunjjssnxca6h6sq6o9gmyc8qkz52wg1itmgo0sjhaykq5mx3jm52mxj6z5w3a01na7lodt12ptx60w0gkp396u3lm918nlltno0lta2hbx2i3neq2udl6mdbhu9ssyypu307qe44cy4u05azbd4e1qhju9rj9oefjvrxoya03yj4hy3mzwy7l1rq821jg2ne7mcwt76qp2c7axkjz75j248uol9bw59ne2y25fhelogt76119rsclqbjc54enmrx7jr96666rgyv8pfuzl77mahqhlebv7aj5v0d5jed == \l\d\4\1\o\o\k\e\6\g\w\0\p\u\n\2\c\x\s\y\x\m\q\3\j\4\z\9\s\2\w\t\i\x\g\x\1\6\j\c\t\w\i\j\t\r\7\h\e\f\v\5\9\1\s\5\u\w\d\m\w\v\a\b\5\c\5\7\3\2\z\x\i\n\s\l\p\h\u\n\s\j\3\v\i\8\u\h\w\t\r\m\v\t\n\c\k\5\c\t\w\y\e\n\a\0\b\n\z\j\g\o\p\6\q\a\u\z\t\f\v\r\c\d\q\n\k\8\v\g\h\8\l\o\9\v\a\l\z\5\e\a\e\f\5\p\v\n\r\s\g\o\3\i\3\9\6\m\l\8\a\p\g\b\l\i\y\f\i\2\l\4\e\n\w\q\o\h\2\g\r\a\h\v\3\1\2\3\v\j\4\y\e\3\7\8\g\8\h\n\4\3\4\4\u\v\b\g\8\g\h\p\3\x\w\v\u\n\j\j\s\s\n\x\c\a\6\h\6\s\q\6\o\9\g\m\y\c\8\q\k\z\5\2\w\g\1\i\t\m\g\o\0\s\j\h\a\y\k\q\5\m\x\3\j\m\5\2\m\x\j\6\z\5\w\3\a\0\1\n\a\7\l\o\d\t\1\2\p\t\x\6\0\w\0\g\k\p\3\9\6\u\3\l\m\9\1\8\n\l\l\t\n\o\0\l\t\a\2\h\b\x\2\i\3\n\e\q\2\u\d\l\6\m\d\b\h\u\9\s\s\y\y\p\u\3\0\7\q\e\4\4\c\y\4\u\0\5\a\z\b\d\4\e\1\q\h\j\u\9\r\j\9\o\e\f\j\v\r\x\o\y\a\0\3\y\j\4\h\y\3\m\z\w\y\7\l\1\r\q\8\2\1\j\g\2\n\e\7\m\c\w\t\7\6\q\p\2\c\7\a\x\k\j\z\7\5\j\2\4\8\u\o\l\9\b\w\5\9\n\e\2\y\2\5\f\h\e\l\o\g\t\7\6\1\1\9\r\s\c\l\q\b\j\c\5\4\e\n\m\r\x\7\j\r\9\6\6\6\6\r\g\y\v\8\p\f\u\z\l\7\7\m\a\h\q\h\l\e\b\v\7\a\j\5\v\0\d\5\j\e\d ]] 00:19:24.519 21:32:45 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:19:24.519 21:32:45 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:19:24.519 [2024-07-11 21:32:45.403733] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:19:24.519 [2024-07-11 21:32:45.403885] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70453 ] 00:19:24.776 [2024-07-11 21:32:45.549320] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:24.776 [2024-07-11 21:32:45.648616] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:25.033  Copying: 512/512 [B] (average 500 kBps) 00:19:25.033 00:19:25.033 21:32:45 -- dd/posix.sh@93 -- # [[ ld41ooke6gw0pun2cxsyxmq3j4z9s2wtixgx16jctwijtr7hefv591s5uwdmwvab5c5732zxinslphunsj3vi8uhwtrmvtnck5ctwyena0bnzjgop6qauztfvrcdqnk8vgh8lo9valz5eaef5pvnrsgo3i396ml8apgbliyfi2l4enwqoh2grahv3123vj4ye378g8hn4344uvbg8ghp3xwvunjjssnxca6h6sq6o9gmyc8qkz52wg1itmgo0sjhaykq5mx3jm52mxj6z5w3a01na7lodt12ptx60w0gkp396u3lm918nlltno0lta2hbx2i3neq2udl6mdbhu9ssyypu307qe44cy4u05azbd4e1qhju9rj9oefjvrxoya03yj4hy3mzwy7l1rq821jg2ne7mcwt76qp2c7axkjz75j248uol9bw59ne2y25fhelogt76119rsclqbjc54enmrx7jr96666rgyv8pfuzl77mahqhlebv7aj5v0d5jed == \l\d\4\1\o\o\k\e\6\g\w\0\p\u\n\2\c\x\s\y\x\m\q\3\j\4\z\9\s\2\w\t\i\x\g\x\1\6\j\c\t\w\i\j\t\r\7\h\e\f\v\5\9\1\s\5\u\w\d\m\w\v\a\b\5\c\5\7\3\2\z\x\i\n\s\l\p\h\u\n\s\j\3\v\i\8\u\h\w\t\r\m\v\t\n\c\k\5\c\t\w\y\e\n\a\0\b\n\z\j\g\o\p\6\q\a\u\z\t\f\v\r\c\d\q\n\k\8\v\g\h\8\l\o\9\v\a\l\z\5\e\a\e\f\5\p\v\n\r\s\g\o\3\i\3\9\6\m\l\8\a\p\g\b\l\i\y\f\i\2\l\4\e\n\w\q\o\h\2\g\r\a\h\v\3\1\2\3\v\j\4\y\e\3\7\8\g\8\h\n\4\3\4\4\u\v\b\g\8\g\h\p\3\x\w\v\u\n\j\j\s\s\n\x\c\a\6\h\6\s\q\6\o\9\g\m\y\c\8\q\k\z\5\2\w\g\1\i\t\m\g\o\0\s\j\h\a\y\k\q\5\m\x\3\j\m\5\2\m\x\j\6\z\5\w\3\a\0\1\n\a\7\l\o\d\t\1\2\p\t\x\6\0\w\0\g\k\p\3\9\6\u\3\l\m\9\1\8\n\l\l\t\n\o\0\l\t\a\2\h\b\x\2\i\3\n\e\q\2\u\d\l\6\m\d\b\h\u\9\s\s\y\y\p\u\3\0\7\q\e\4\4\c\y\4\u\0\5\a\z\b\d\4\e\1\q\h\j\u\9\r\j\9\o\e\f\j\v\r\x\o\y\a\0\3\y\j\4\h\y\3\m\z\w\y\7\l\1\r\q\8\2\1\j\g\2\n\e\7\m\c\w\t\7\6\q\p\2\c\7\a\x\k\j\z\7\5\j\2\4\8\u\o\l\9\b\w\5\9\n\e\2\y\2\5\f\h\e\l\o\g\t\7\6\1\1\9\r\s\c\l\q\b\j\c\5\4\e\n\m\r\x\7\j\r\9\6\6\6\6\r\g\y\v\8\p\f\u\z\l\7\7\m\a\h\q\h\l\e\b\v\7\a\j\5\v\0\d\5\j\e\d ]] 00:19:25.033 21:32:45 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:19:25.033 21:32:45 -- dd/posix.sh@86 -- # gen_bytes 512 00:19:25.033 21:32:45 -- dd/common.sh@98 -- # xtrace_disable 00:19:25.033 21:32:45 -- common/autotest_common.sh@10 -- # set +x 00:19:25.033 21:32:45 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:19:25.033 21:32:45 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:19:25.289 [2024-07-11 21:32:46.014700] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:19:25.289 [2024-07-11 21:32:46.014817] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70466 ] 00:19:25.289 [2024-07-11 21:32:46.156322] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:25.545 [2024-07-11 21:32:46.256559] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:25.804  Copying: 512/512 [B] (average 500 kBps) 00:19:25.804 00:19:25.804 21:32:46 -- dd/posix.sh@93 -- # [[ p1mmgm0smsc0r5i0mh3wp5ksapker1ozgrkpc7926mts3gs9ld14vutmn1vi0p07vseu4rc3ppwkld4zndzjvidfqlotbmzgyqiuc47pyx0bux99nqrkiuegsc8oia7tlbil9a3oz1xayhj82cgtbt979y978ucwxonz1jkwdxvvroq1gcq17dxha4gmmy2km1ij9s8v20sxqv6nhmcwhchkz2kmwu9ud8mflqca84sa15k7pgdmo9fkglsr0m0xy3qx9t6np0e1vm8hanpj815i2cui9xcw8ftza31mycbygwbzf4s6y6tith9l4z2zlpjrbbmnmah7s069xa3codeldawrfaeeridd6iyij2i725x169wnjmmgguttrtezs9t6ee8mc3ndad3wpfhhgjmm55p1vpqnu5od675snm2u587iddx7hu0ytkkxexz5s83l5pp740bp0ljjc4qthxyl4xiyuzb94vpeg4ego0gej3484dor1rwho38k3zt5 == \p\1\m\m\g\m\0\s\m\s\c\0\r\5\i\0\m\h\3\w\p\5\k\s\a\p\k\e\r\1\o\z\g\r\k\p\c\7\9\2\6\m\t\s\3\g\s\9\l\d\1\4\v\u\t\m\n\1\v\i\0\p\0\7\v\s\e\u\4\r\c\3\p\p\w\k\l\d\4\z\n\d\z\j\v\i\d\f\q\l\o\t\b\m\z\g\y\q\i\u\c\4\7\p\y\x\0\b\u\x\9\9\n\q\r\k\i\u\e\g\s\c\8\o\i\a\7\t\l\b\i\l\9\a\3\o\z\1\x\a\y\h\j\8\2\c\g\t\b\t\9\7\9\y\9\7\8\u\c\w\x\o\n\z\1\j\k\w\d\x\v\v\r\o\q\1\g\c\q\1\7\d\x\h\a\4\g\m\m\y\2\k\m\1\i\j\9\s\8\v\2\0\s\x\q\v\6\n\h\m\c\w\h\c\h\k\z\2\k\m\w\u\9\u\d\8\m\f\l\q\c\a\8\4\s\a\1\5\k\7\p\g\d\m\o\9\f\k\g\l\s\r\0\m\0\x\y\3\q\x\9\t\6\n\p\0\e\1\v\m\8\h\a\n\p\j\8\1\5\i\2\c\u\i\9\x\c\w\8\f\t\z\a\3\1\m\y\c\b\y\g\w\b\z\f\4\s\6\y\6\t\i\t\h\9\l\4\z\2\z\l\p\j\r\b\b\m\n\m\a\h\7\s\0\6\9\x\a\3\c\o\d\e\l\d\a\w\r\f\a\e\e\r\i\d\d\6\i\y\i\j\2\i\7\2\5\x\1\6\9\w\n\j\m\m\g\g\u\t\t\r\t\e\z\s\9\t\6\e\e\8\m\c\3\n\d\a\d\3\w\p\f\h\h\g\j\m\m\5\5\p\1\v\p\q\n\u\5\o\d\6\7\5\s\n\m\2\u\5\8\7\i\d\d\x\7\h\u\0\y\t\k\k\x\e\x\z\5\s\8\3\l\5\p\p\7\4\0\b\p\0\l\j\j\c\4\q\t\h\x\y\l\4\x\i\y\u\z\b\9\4\v\p\e\g\4\e\g\o\0\g\e\j\3\4\8\4\d\o\r\1\r\w\h\o\3\8\k\3\z\t\5 ]] 00:19:25.804 21:32:46 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:19:25.804 21:32:46 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:19:25.804 [2024-07-11 21:32:46.640057] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:19:25.804 [2024-07-11 21:32:46.640173] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70472 ] 00:19:26.063 [2024-07-11 21:32:46.779554] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:26.063 [2024-07-11 21:32:46.879764] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:26.321  Copying: 512/512 [B] (average 500 kBps) 00:19:26.321 00:19:26.321 21:32:47 -- dd/posix.sh@93 -- # [[ p1mmgm0smsc0r5i0mh3wp5ksapker1ozgrkpc7926mts3gs9ld14vutmn1vi0p07vseu4rc3ppwkld4zndzjvidfqlotbmzgyqiuc47pyx0bux99nqrkiuegsc8oia7tlbil9a3oz1xayhj82cgtbt979y978ucwxonz1jkwdxvvroq1gcq17dxha4gmmy2km1ij9s8v20sxqv6nhmcwhchkz2kmwu9ud8mflqca84sa15k7pgdmo9fkglsr0m0xy3qx9t6np0e1vm8hanpj815i2cui9xcw8ftza31mycbygwbzf4s6y6tith9l4z2zlpjrbbmnmah7s069xa3codeldawrfaeeridd6iyij2i725x169wnjmmgguttrtezs9t6ee8mc3ndad3wpfhhgjmm55p1vpqnu5od675snm2u587iddx7hu0ytkkxexz5s83l5pp740bp0ljjc4qthxyl4xiyuzb94vpeg4ego0gej3484dor1rwho38k3zt5 == \p\1\m\m\g\m\0\s\m\s\c\0\r\5\i\0\m\h\3\w\p\5\k\s\a\p\k\e\r\1\o\z\g\r\k\p\c\7\9\2\6\m\t\s\3\g\s\9\l\d\1\4\v\u\t\m\n\1\v\i\0\p\0\7\v\s\e\u\4\r\c\3\p\p\w\k\l\d\4\z\n\d\z\j\v\i\d\f\q\l\o\t\b\m\z\g\y\q\i\u\c\4\7\p\y\x\0\b\u\x\9\9\n\q\r\k\i\u\e\g\s\c\8\o\i\a\7\t\l\b\i\l\9\a\3\o\z\1\x\a\y\h\j\8\2\c\g\t\b\t\9\7\9\y\9\7\8\u\c\w\x\o\n\z\1\j\k\w\d\x\v\v\r\o\q\1\g\c\q\1\7\d\x\h\a\4\g\m\m\y\2\k\m\1\i\j\9\s\8\v\2\0\s\x\q\v\6\n\h\m\c\w\h\c\h\k\z\2\k\m\w\u\9\u\d\8\m\f\l\q\c\a\8\4\s\a\1\5\k\7\p\g\d\m\o\9\f\k\g\l\s\r\0\m\0\x\y\3\q\x\9\t\6\n\p\0\e\1\v\m\8\h\a\n\p\j\8\1\5\i\2\c\u\i\9\x\c\w\8\f\t\z\a\3\1\m\y\c\b\y\g\w\b\z\f\4\s\6\y\6\t\i\t\h\9\l\4\z\2\z\l\p\j\r\b\b\m\n\m\a\h\7\s\0\6\9\x\a\3\c\o\d\e\l\d\a\w\r\f\a\e\e\r\i\d\d\6\i\y\i\j\2\i\7\2\5\x\1\6\9\w\n\j\m\m\g\g\u\t\t\r\t\e\z\s\9\t\6\e\e\8\m\c\3\n\d\a\d\3\w\p\f\h\h\g\j\m\m\5\5\p\1\v\p\q\n\u\5\o\d\6\7\5\s\n\m\2\u\5\8\7\i\d\d\x\7\h\u\0\y\t\k\k\x\e\x\z\5\s\8\3\l\5\p\p\7\4\0\b\p\0\l\j\j\c\4\q\t\h\x\y\l\4\x\i\y\u\z\b\9\4\v\p\e\g\4\e\g\o\0\g\e\j\3\4\8\4\d\o\r\1\r\w\h\o\3\8\k\3\z\t\5 ]] 00:19:26.321 21:32:47 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:19:26.321 21:32:47 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:19:26.321 [2024-07-11 21:32:47.258641] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:19:26.321 [2024-07-11 21:32:47.258791] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70481 ] 00:19:26.632 [2024-07-11 21:32:47.404711] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:26.632 [2024-07-11 21:32:47.512977] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:26.891  Copying: 512/512 [B] (average 250 kBps) 00:19:26.891 00:19:26.892 21:32:47 -- dd/posix.sh@93 -- # [[ p1mmgm0smsc0r5i0mh3wp5ksapker1ozgrkpc7926mts3gs9ld14vutmn1vi0p07vseu4rc3ppwkld4zndzjvidfqlotbmzgyqiuc47pyx0bux99nqrkiuegsc8oia7tlbil9a3oz1xayhj82cgtbt979y978ucwxonz1jkwdxvvroq1gcq17dxha4gmmy2km1ij9s8v20sxqv6nhmcwhchkz2kmwu9ud8mflqca84sa15k7pgdmo9fkglsr0m0xy3qx9t6np0e1vm8hanpj815i2cui9xcw8ftza31mycbygwbzf4s6y6tith9l4z2zlpjrbbmnmah7s069xa3codeldawrfaeeridd6iyij2i725x169wnjmmgguttrtezs9t6ee8mc3ndad3wpfhhgjmm55p1vpqnu5od675snm2u587iddx7hu0ytkkxexz5s83l5pp740bp0ljjc4qthxyl4xiyuzb94vpeg4ego0gej3484dor1rwho38k3zt5 == \p\1\m\m\g\m\0\s\m\s\c\0\r\5\i\0\m\h\3\w\p\5\k\s\a\p\k\e\r\1\o\z\g\r\k\p\c\7\9\2\6\m\t\s\3\g\s\9\l\d\1\4\v\u\t\m\n\1\v\i\0\p\0\7\v\s\e\u\4\r\c\3\p\p\w\k\l\d\4\z\n\d\z\j\v\i\d\f\q\l\o\t\b\m\z\g\y\q\i\u\c\4\7\p\y\x\0\b\u\x\9\9\n\q\r\k\i\u\e\g\s\c\8\o\i\a\7\t\l\b\i\l\9\a\3\o\z\1\x\a\y\h\j\8\2\c\g\t\b\t\9\7\9\y\9\7\8\u\c\w\x\o\n\z\1\j\k\w\d\x\v\v\r\o\q\1\g\c\q\1\7\d\x\h\a\4\g\m\m\y\2\k\m\1\i\j\9\s\8\v\2\0\s\x\q\v\6\n\h\m\c\w\h\c\h\k\z\2\k\m\w\u\9\u\d\8\m\f\l\q\c\a\8\4\s\a\1\5\k\7\p\g\d\m\o\9\f\k\g\l\s\r\0\m\0\x\y\3\q\x\9\t\6\n\p\0\e\1\v\m\8\h\a\n\p\j\8\1\5\i\2\c\u\i\9\x\c\w\8\f\t\z\a\3\1\m\y\c\b\y\g\w\b\z\f\4\s\6\y\6\t\i\t\h\9\l\4\z\2\z\l\p\j\r\b\b\m\n\m\a\h\7\s\0\6\9\x\a\3\c\o\d\e\l\d\a\w\r\f\a\e\e\r\i\d\d\6\i\y\i\j\2\i\7\2\5\x\1\6\9\w\n\j\m\m\g\g\u\t\t\r\t\e\z\s\9\t\6\e\e\8\m\c\3\n\d\a\d\3\w\p\f\h\h\g\j\m\m\5\5\p\1\v\p\q\n\u\5\o\d\6\7\5\s\n\m\2\u\5\8\7\i\d\d\x\7\h\u\0\y\t\k\k\x\e\x\z\5\s\8\3\l\5\p\p\7\4\0\b\p\0\l\j\j\c\4\q\t\h\x\y\l\4\x\i\y\u\z\b\9\4\v\p\e\g\4\e\g\o\0\g\e\j\3\4\8\4\d\o\r\1\r\w\h\o\3\8\k\3\z\t\5 ]] 00:19:26.892 21:32:47 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:19:26.892 21:32:47 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:19:27.150 [2024-07-11 21:32:47.869719] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:19:27.150 [2024-07-11 21:32:47.869827] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70494 ] 00:19:27.150 [2024-07-11 21:32:48.005949] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:27.408 [2024-07-11 21:32:48.103980] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:27.666  Copying: 512/512 [B] (average 500 kBps) 00:19:27.666 00:19:27.666 ************************************ 00:19:27.666 END TEST dd_flags_misc 00:19:27.666 ************************************ 00:19:27.666 21:32:48 -- dd/posix.sh@93 -- # [[ p1mmgm0smsc0r5i0mh3wp5ksapker1ozgrkpc7926mts3gs9ld14vutmn1vi0p07vseu4rc3ppwkld4zndzjvidfqlotbmzgyqiuc47pyx0bux99nqrkiuegsc8oia7tlbil9a3oz1xayhj82cgtbt979y978ucwxonz1jkwdxvvroq1gcq17dxha4gmmy2km1ij9s8v20sxqv6nhmcwhchkz2kmwu9ud8mflqca84sa15k7pgdmo9fkglsr0m0xy3qx9t6np0e1vm8hanpj815i2cui9xcw8ftza31mycbygwbzf4s6y6tith9l4z2zlpjrbbmnmah7s069xa3codeldawrfaeeridd6iyij2i725x169wnjmmgguttrtezs9t6ee8mc3ndad3wpfhhgjmm55p1vpqnu5od675snm2u587iddx7hu0ytkkxexz5s83l5pp740bp0ljjc4qthxyl4xiyuzb94vpeg4ego0gej3484dor1rwho38k3zt5 == \p\1\m\m\g\m\0\s\m\s\c\0\r\5\i\0\m\h\3\w\p\5\k\s\a\p\k\e\r\1\o\z\g\r\k\p\c\7\9\2\6\m\t\s\3\g\s\9\l\d\1\4\v\u\t\m\n\1\v\i\0\p\0\7\v\s\e\u\4\r\c\3\p\p\w\k\l\d\4\z\n\d\z\j\v\i\d\f\q\l\o\t\b\m\z\g\y\q\i\u\c\4\7\p\y\x\0\b\u\x\9\9\n\q\r\k\i\u\e\g\s\c\8\o\i\a\7\t\l\b\i\l\9\a\3\o\z\1\x\a\y\h\j\8\2\c\g\t\b\t\9\7\9\y\9\7\8\u\c\w\x\o\n\z\1\j\k\w\d\x\v\v\r\o\q\1\g\c\q\1\7\d\x\h\a\4\g\m\m\y\2\k\m\1\i\j\9\s\8\v\2\0\s\x\q\v\6\n\h\m\c\w\h\c\h\k\z\2\k\m\w\u\9\u\d\8\m\f\l\q\c\a\8\4\s\a\1\5\k\7\p\g\d\m\o\9\f\k\g\l\s\r\0\m\0\x\y\3\q\x\9\t\6\n\p\0\e\1\v\m\8\h\a\n\p\j\8\1\5\i\2\c\u\i\9\x\c\w\8\f\t\z\a\3\1\m\y\c\b\y\g\w\b\z\f\4\s\6\y\6\t\i\t\h\9\l\4\z\2\z\l\p\j\r\b\b\m\n\m\a\h\7\s\0\6\9\x\a\3\c\o\d\e\l\d\a\w\r\f\a\e\e\r\i\d\d\6\i\y\i\j\2\i\7\2\5\x\1\6\9\w\n\j\m\m\g\g\u\t\t\r\t\e\z\s\9\t\6\e\e\8\m\c\3\n\d\a\d\3\w\p\f\h\h\g\j\m\m\5\5\p\1\v\p\q\n\u\5\o\d\6\7\5\s\n\m\2\u\5\8\7\i\d\d\x\7\h\u\0\y\t\k\k\x\e\x\z\5\s\8\3\l\5\p\p\7\4\0\b\p\0\l\j\j\c\4\q\t\h\x\y\l\4\x\i\y\u\z\b\9\4\v\p\e\g\4\e\g\o\0\g\e\j\3\4\8\4\d\o\r\1\r\w\h\o\3\8\k\3\z\t\5 ]] 00:19:27.666 00:19:27.666 real 0m4.876s 00:19:27.666 user 0m2.708s 00:19:27.666 sys 0m1.176s 00:19:27.666 21:32:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:27.666 21:32:48 -- common/autotest_common.sh@10 -- # set +x 00:19:27.666 21:32:48 -- dd/posix.sh@131 -- # tests_forced_aio 00:19:27.666 21:32:48 -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:19:27.666 * Second test run, disabling liburing, forcing AIO 00:19:27.666 21:32:48 -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:19:27.666 21:32:48 -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:19:27.666 21:32:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:19:27.666 21:32:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:27.666 21:32:48 -- common/autotest_common.sh@10 -- # set +x 00:19:27.666 ************************************ 00:19:27.666 START TEST dd_flag_append_forced_aio 00:19:27.666 ************************************ 00:19:27.666 21:32:48 -- common/autotest_common.sh@1104 -- # append 00:19:27.666 21:32:48 -- dd/posix.sh@16 -- # local dump0 00:19:27.666 21:32:48 -- dd/posix.sh@17 -- # local dump1 00:19:27.666 21:32:48 -- dd/posix.sh@19 -- # gen_bytes 32 00:19:27.666 21:32:48 -- 
dd/common.sh@98 -- # xtrace_disable 00:19:27.666 21:32:48 -- common/autotest_common.sh@10 -- # set +x 00:19:27.666 21:32:48 -- dd/posix.sh@19 -- # dump0=zwnuwg3rg6v72u028ix6kfokai2d9i1z 00:19:27.666 21:32:48 -- dd/posix.sh@20 -- # gen_bytes 32 00:19:27.666 21:32:48 -- dd/common.sh@98 -- # xtrace_disable 00:19:27.666 21:32:48 -- common/autotest_common.sh@10 -- # set +x 00:19:27.666 21:32:48 -- dd/posix.sh@20 -- # dump1=ufghi5ohus9v1kruq9cutt757vm4nal9 00:19:27.666 21:32:48 -- dd/posix.sh@22 -- # printf %s zwnuwg3rg6v72u028ix6kfokai2d9i1z 00:19:27.666 21:32:48 -- dd/posix.sh@23 -- # printf %s ufghi5ohus9v1kruq9cutt757vm4nal9 00:19:27.666 21:32:48 -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:19:27.666 [2024-07-11 21:32:48.540465] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:19:27.666 [2024-07-11 21:32:48.540658] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70515 ] 00:19:27.924 [2024-07-11 21:32:48.680971] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:27.924 [2024-07-11 21:32:48.780437] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:28.181  Copying: 32/32 [B] (average 31 kBps) 00:19:28.181 00:19:28.181 ************************************ 00:19:28.181 END TEST dd_flag_append_forced_aio 00:19:28.181 ************************************ 00:19:28.181 21:32:49 -- dd/posix.sh@27 -- # [[ ufghi5ohus9v1kruq9cutt757vm4nal9zwnuwg3rg6v72u028ix6kfokai2d9i1z == \u\f\g\h\i\5\o\h\u\s\9\v\1\k\r\u\q\9\c\u\t\t\7\5\7\v\m\4\n\a\l\9\z\w\n\u\w\g\3\r\g\6\v\7\2\u\0\2\8\i\x\6\k\f\o\k\a\i\2\d\9\i\1\z ]] 00:19:28.181 00:19:28.181 real 0m0.619s 00:19:28.181 user 0m0.334s 00:19:28.181 sys 0m0.161s 00:19:28.181 21:32:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:28.181 21:32:49 -- common/autotest_common.sh@10 -- # set +x 00:19:28.439 21:32:49 -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:19:28.439 21:32:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:19:28.439 21:32:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:28.439 21:32:49 -- common/autotest_common.sh@10 -- # set +x 00:19:28.439 ************************************ 00:19:28.439 START TEST dd_flag_directory_forced_aio 00:19:28.439 ************************************ 00:19:28.439 21:32:49 -- common/autotest_common.sh@1104 -- # directory 00:19:28.439 21:32:49 -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:19:28.439 21:32:49 -- common/autotest_common.sh@640 -- # local es=0 00:19:28.439 21:32:49 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:19:28.439 21:32:49 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:28.439 21:32:49 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:28.439 21:32:49 -- common/autotest_common.sh@632 -- # type -t 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:28.439 21:32:49 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:28.439 21:32:49 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:28.439 21:32:49 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:28.439 21:32:49 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:28.439 21:32:49 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:19:28.439 21:32:49 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:19:28.439 [2024-07-11 21:32:49.200295] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:19:28.439 [2024-07-11 21:32:49.200434] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70547 ] 00:19:28.439 [2024-07-11 21:32:49.346389] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:28.697 [2024-07-11 21:32:49.444617] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:28.697 [2024-07-11 21:32:49.533012] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:19:28.697 [2024-07-11 21:32:49.533082] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:19:28.697 [2024-07-11 21:32:49.533098] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:19:28.955 [2024-07-11 21:32:49.650233] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:19:28.955 21:32:49 -- common/autotest_common.sh@643 -- # es=236 00:19:28.955 21:32:49 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:19:28.955 21:32:49 -- common/autotest_common.sh@652 -- # es=108 00:19:28.955 21:32:49 -- common/autotest_common.sh@653 -- # case "$es" in 00:19:28.955 21:32:49 -- common/autotest_common.sh@660 -- # es=1 00:19:28.955 21:32:49 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:19:28.955 21:32:49 -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:19:28.955 21:32:49 -- common/autotest_common.sh@640 -- # local es=0 00:19:28.955 21:32:49 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:19:28.955 21:32:49 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:28.955 21:32:49 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:28.955 21:32:49 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:28.955 21:32:49 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:28.955 21:32:49 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:28.955 21:32:49 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:28.955 21:32:49 -- 
common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:28.955 21:32:49 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:19:28.955 21:32:49 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:19:28.955 [2024-07-11 21:32:49.809887] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:19:28.955 [2024-07-11 21:32:49.810035] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70555 ] 00:19:29.213 [2024-07-11 21:32:49.956926] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:29.213 [2024-07-11 21:32:50.055757] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:29.213 [2024-07-11 21:32:50.144283] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:19:29.213 [2024-07-11 21:32:50.144348] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:19:29.213 [2024-07-11 21:32:50.144365] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:19:29.471 [2024-07-11 21:32:50.260469] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:19:29.471 21:32:50 -- common/autotest_common.sh@643 -- # es=236 00:19:29.471 21:32:50 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:19:29.471 21:32:50 -- common/autotest_common.sh@652 -- # es=108 00:19:29.471 21:32:50 -- common/autotest_common.sh@653 -- # case "$es" in 00:19:29.471 21:32:50 -- common/autotest_common.sh@660 -- # es=1 00:19:29.471 21:32:50 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:19:29.471 00:19:29.471 real 0m1.210s 00:19:29.471 user 0m0.697s 00:19:29.471 sys 0m0.296s 00:19:29.471 21:32:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:29.471 ************************************ 00:19:29.471 END TEST dd_flag_directory_forced_aio 00:19:29.471 ************************************ 00:19:29.471 21:32:50 -- common/autotest_common.sh@10 -- # set +x 00:19:29.471 21:32:50 -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:19:29.471 21:32:50 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:19:29.471 21:32:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:29.471 21:32:50 -- common/autotest_common.sh@10 -- # set +x 00:19:29.471 ************************************ 00:19:29.471 START TEST dd_flag_nofollow_forced_aio 00:19:29.471 ************************************ 00:19:29.471 21:32:50 -- common/autotest_common.sh@1104 -- # nofollow 00:19:29.471 21:32:50 -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:19:29.471 21:32:50 -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:19:29.471 21:32:50 -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:19:29.471 21:32:50 -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:19:29.471 21:32:50 -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:19:29.471 21:32:50 -- common/autotest_common.sh@640 -- # local es=0 00:19:29.471 21:32:50 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:19:29.471 21:32:50 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:29.471 21:32:50 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:29.471 21:32:50 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:29.471 21:32:50 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:29.471 21:32:50 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:29.471 21:32:50 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:29.471 21:32:50 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:29.471 21:32:50 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:19:29.471 21:32:50 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:19:29.729 [2024-07-11 21:32:50.458959] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:19:29.729 [2024-07-11 21:32:50.459076] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70585 ] 00:19:29.729 [2024-07-11 21:32:50.592459] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:29.986 [2024-07-11 21:32:50.693264] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:29.986 [2024-07-11 21:32:50.783371] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:19:29.986 [2024-07-11 21:32:50.783443] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:19:29.986 [2024-07-11 21:32:50.783460] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:19:29.986 [2024-07-11 21:32:50.899620] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:19:30.316 21:32:50 -- common/autotest_common.sh@643 -- # es=216 00:19:30.316 21:32:50 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:19:30.316 21:32:50 -- common/autotest_common.sh@652 -- # es=88 00:19:30.316 21:32:50 -- common/autotest_common.sh@653 -- # case "$es" in 00:19:30.316 21:32:50 -- common/autotest_common.sh@660 -- # es=1 00:19:30.316 21:32:50 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:19:30.316 21:32:50 -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:19:30.316 21:32:50 -- common/autotest_common.sh@640 -- # local es=0 00:19:30.316 21:32:50 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:19:30.316 21:32:50 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:30.316 21:32:50 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:30.316 21:32:51 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:30.316 21:32:50 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:30.316 21:32:51 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:30.316 21:32:50 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:30.316 21:32:51 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:30.316 21:32:51 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:19:30.316 21:32:51 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:19:30.316 [2024-07-11 21:32:51.062648] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:19:30.316 [2024-07-11 21:32:51.062795] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70600 ] 00:19:30.316 [2024-07-11 21:32:51.203773] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:30.573 [2024-07-11 21:32:51.304385] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:30.573 [2024-07-11 21:32:51.392868] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:19:30.573 [2024-07-11 21:32:51.392939] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:19:30.573 [2024-07-11 21:32:51.392956] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:19:30.573 [2024-07-11 21:32:51.506415] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:19:30.832 21:32:51 -- common/autotest_common.sh@643 -- # es=216 00:19:30.832 21:32:51 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:19:30.832 21:32:51 -- common/autotest_common.sh@652 -- # es=88 00:19:30.832 21:32:51 -- common/autotest_common.sh@653 -- # case "$es" in 00:19:30.832 21:32:51 -- common/autotest_common.sh@660 -- # es=1 00:19:30.832 21:32:51 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:19:30.832 21:32:51 -- dd/posix.sh@46 -- # gen_bytes 512 00:19:30.832 21:32:51 -- dd/common.sh@98 -- # xtrace_disable 00:19:30.832 21:32:51 -- common/autotest_common.sh@10 -- # set +x 00:19:30.832 21:32:51 -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:19:30.832 [2024-07-11 21:32:51.658948] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:19:30.832 [2024-07-11 21:32:51.659299] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70602 ] 00:19:31.091 [2024-07-11 21:32:51.795744] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:31.091 [2024-07-11 21:32:51.895672] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:31.348  Copying: 512/512 [B] (average 500 kBps) 00:19:31.348 00:19:31.348 21:32:52 -- dd/posix.sh@49 -- # [[ zkf3xuw45q9plds2t4kmt6xncn4lo349sodw3m5tl9zp8ykkzk7ogyd60vdlmvskvs4e5p13q8xexriqrbf4wk37cepth388goglnq8o9p677kk1n76gmiwnzfgunp33eidzxctonl3jrjmo6bcgis27g2zzqfq09g0px90dkzga66v6ev72jh1o6tnscb41urseehzv2vyty0ydiwuu3fz3g06kwvci06mnob4f46vw44bf8fhui5u9e4t9vul210ac70njdgapv9hc43ita07rludflw8bybx1u8p5edqylqcdbbknfxp5sxa7fqa8yyz3kyfg1j0retwgd0mu16g0vm0g33rqi671afbbqljhnfrix43mdy6fk8vh6wq927ev3l7kqz639srs71m3kp3t79izs9q8l6lypbo98v9q2vun1vnbhkqzp4boresnhntdibz2jiwk5tt0t9g03kzb6m6cuz2aweygy3wghl148qkshy2j9k80eslp0pdv == \z\k\f\3\x\u\w\4\5\q\9\p\l\d\s\2\t\4\k\m\t\6\x\n\c\n\4\l\o\3\4\9\s\o\d\w\3\m\5\t\l\9\z\p\8\y\k\k\z\k\7\o\g\y\d\6\0\v\d\l\m\v\s\k\v\s\4\e\5\p\1\3\q\8\x\e\x\r\i\q\r\b\f\4\w\k\3\7\c\e\p\t\h\3\8\8\g\o\g\l\n\q\8\o\9\p\6\7\7\k\k\1\n\7\6\g\m\i\w\n\z\f\g\u\n\p\3\3\e\i\d\z\x\c\t\o\n\l\3\j\r\j\m\o\6\b\c\g\i\s\2\7\g\2\z\z\q\f\q\0\9\g\0\p\x\9\0\d\k\z\g\a\6\6\v\6\e\v\7\2\j\h\1\o\6\t\n\s\c\b\4\1\u\r\s\e\e\h\z\v\2\v\y\t\y\0\y\d\i\w\u\u\3\f\z\3\g\0\6\k\w\v\c\i\0\6\m\n\o\b\4\f\4\6\v\w\4\4\b\f\8\f\h\u\i\5\u\9\e\4\t\9\v\u\l\2\1\0\a\c\7\0\n\j\d\g\a\p\v\9\h\c\4\3\i\t\a\0\7\r\l\u\d\f\l\w\8\b\y\b\x\1\u\8\p\5\e\d\q\y\l\q\c\d\b\b\k\n\f\x\p\5\s\x\a\7\f\q\a\8\y\y\z\3\k\y\f\g\1\j\0\r\e\t\w\g\d\0\m\u\1\6\g\0\v\m\0\g\3\3\r\q\i\6\7\1\a\f\b\b\q\l\j\h\n\f\r\i\x\4\3\m\d\y\6\f\k\8\v\h\6\w\q\9\2\7\e\v\3\l\7\k\q\z\6\3\9\s\r\s\7\1\m\3\k\p\3\t\7\9\i\z\s\9\q\8\l\6\l\y\p\b\o\9\8\v\9\q\2\v\u\n\1\v\n\b\h\k\q\z\p\4\b\o\r\e\s\n\h\n\t\d\i\b\z\2\j\i\w\k\5\t\t\0\t\9\g\0\3\k\z\b\6\m\6\c\u\z\2\a\w\e\y\g\y\3\w\g\h\l\1\4\8\q\k\s\h\y\2\j\9\k\8\0\e\s\l\p\0\p\d\v ]] 00:19:31.348 00:19:31.348 real 0m1.821s 00:19:31.348 user 0m1.022s 00:19:31.348 sys 0m0.467s 00:19:31.348 21:32:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:31.348 ************************************ 00:19:31.348 END TEST dd_flag_nofollow_forced_aio 00:19:31.348 ************************************ 00:19:31.348 21:32:52 -- common/autotest_common.sh@10 -- # set +x 00:19:31.348 21:32:52 -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:19:31.348 21:32:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:19:31.348 21:32:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:31.348 21:32:52 -- common/autotest_common.sh@10 -- # set +x 00:19:31.348 ************************************ 00:19:31.348 START TEST dd_flag_noatime_forced_aio 00:19:31.348 ************************************ 00:19:31.348 21:32:52 -- common/autotest_common.sh@1104 -- # noatime 00:19:31.348 21:32:52 -- dd/posix.sh@53 -- # local atime_if 00:19:31.348 21:32:52 -- dd/posix.sh@54 -- # local atime_of 00:19:31.348 21:32:52 -- dd/posix.sh@58 -- # gen_bytes 512 00:19:31.348 21:32:52 -- dd/common.sh@98 -- # xtrace_disable 00:19:31.348 21:32:52 -- common/autotest_common.sh@10 -- # set +x 00:19:31.348 21:32:52 -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:19:31.348 21:32:52 -- dd/posix.sh@60 -- # atime_if=1720733571 
00:19:31.348 21:32:52 -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:19:31.348 21:32:52 -- dd/posix.sh@61 -- # atime_of=1720733572 00:19:31.348 21:32:52 -- dd/posix.sh@66 -- # sleep 1 00:19:32.720 21:32:53 -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:19:32.720 [2024-07-11 21:32:53.352219] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:19:32.720 [2024-07-11 21:32:53.352370] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70648 ] 00:19:32.720 [2024-07-11 21:32:53.497416] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:32.720 [2024-07-11 21:32:53.613775] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:32.978  Copying: 512/512 [B] (average 500 kBps) 00:19:32.978 00:19:32.978 21:32:53 -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:19:32.978 21:32:53 -- dd/posix.sh@69 -- # (( atime_if == 1720733571 )) 00:19:32.978 21:32:53 -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:19:33.236 21:32:53 -- dd/posix.sh@70 -- # (( atime_of == 1720733572 )) 00:19:33.236 21:32:53 -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:19:33.236 [2024-07-11 21:32:53.974211] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:19:33.236 [2024-07-11 21:32:53.974330] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70660 ] 00:19:33.236 [2024-07-11 21:32:54.109985] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:33.494 [2024-07-11 21:32:54.207406] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:33.751  Copying: 512/512 [B] (average 500 kBps) 00:19:33.751 00:19:33.751 21:32:54 -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:19:33.751 21:32:54 -- dd/posix.sh@73 -- # (( atime_if < 1720733574 )) 00:19:33.751 ************************************ 00:19:33.751 END TEST dd_flag_noatime_forced_aio 00:19:33.751 ************************************ 00:19:33.751 00:19:33.751 real 0m2.297s 00:19:33.751 user 0m0.720s 00:19:33.751 sys 0m0.329s 00:19:33.751 21:32:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:33.751 21:32:54 -- common/autotest_common.sh@10 -- # set +x 00:19:33.751 21:32:54 -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:19:33.751 21:32:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:19:33.751 21:32:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:33.751 21:32:54 -- common/autotest_common.sh@10 -- # set +x 00:19:33.751 ************************************ 00:19:33.751 START TEST dd_flags_misc_forced_aio 00:19:33.751 ************************************ 00:19:33.751 21:32:54 -- common/autotest_common.sh@1104 -- # io 00:19:33.751 21:32:54 -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:19:33.751 21:32:54 -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:19:33.751 21:32:54 -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:19:33.751 21:32:54 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:19:33.751 21:32:54 -- dd/posix.sh@86 -- # gen_bytes 512 00:19:33.751 21:32:54 -- dd/common.sh@98 -- # xtrace_disable 00:19:33.751 21:32:54 -- common/autotest_common.sh@10 -- # set +x 00:19:33.751 21:32:54 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:19:33.752 21:32:54 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:19:33.752 [2024-07-11 21:32:54.680168] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:19:33.752 [2024-07-11 21:32:54.680614] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70686 ] 00:19:34.010 [2024-07-11 21:32:54.820140] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:34.010 [2024-07-11 21:32:54.923343] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:34.527  Copying: 512/512 [B] (average 500 kBps) 00:19:34.527 00:19:34.527 21:32:55 -- dd/posix.sh@93 -- # [[ k1dqaxn9r0vc3hmsohrdsdc9461jxv8gqa4xsdogzkio85uk771mll0ug4mynxln2u3xyzm0h615klebaht2jlravgz117jim8uwqejs2n8smd0vsavrx8aw6w0f1vkexjo8vxl678advlexta86kgirb3ggzu9w8to8hn9185lc2tqhvxzfgrdalaio9b1zyviq7m08ow74km5zxzs3xler47ksdjqyx1oe2aqbnvwswin35vbxfm8afvkvgr13mxr7pdbjcn16x5g98kdar11idhcd44m8wi9owtuapiraqje1ghehkiexs0vjejadn28kyke92y28ukmf55hb1dko9wi8brp2ngt6xa93gjiifeusz0107ivqvmps0mg6yriml5js7pj2hrqo9dfuncswqp58bg1i4d2thshr0qussb7pjon7x8hgv0s23xas0ykwaopxcvcnyuch0n6poa1veqsdcdjxrcovcdls1m4eepugbd2l5dsrpj61v1gj == \k\1\d\q\a\x\n\9\r\0\v\c\3\h\m\s\o\h\r\d\s\d\c\9\4\6\1\j\x\v\8\g\q\a\4\x\s\d\o\g\z\k\i\o\8\5\u\k\7\7\1\m\l\l\0\u\g\4\m\y\n\x\l\n\2\u\3\x\y\z\m\0\h\6\1\5\k\l\e\b\a\h\t\2\j\l\r\a\v\g\z\1\1\7\j\i\m\8\u\w\q\e\j\s\2\n\8\s\m\d\0\v\s\a\v\r\x\8\a\w\6\w\0\f\1\v\k\e\x\j\o\8\v\x\l\6\7\8\a\d\v\l\e\x\t\a\8\6\k\g\i\r\b\3\g\g\z\u\9\w\8\t\o\8\h\n\9\1\8\5\l\c\2\t\q\h\v\x\z\f\g\r\d\a\l\a\i\o\9\b\1\z\y\v\i\q\7\m\0\8\o\w\7\4\k\m\5\z\x\z\s\3\x\l\e\r\4\7\k\s\d\j\q\y\x\1\o\e\2\a\q\b\n\v\w\s\w\i\n\3\5\v\b\x\f\m\8\a\f\v\k\v\g\r\1\3\m\x\r\7\p\d\b\j\c\n\1\6\x\5\g\9\8\k\d\a\r\1\1\i\d\h\c\d\4\4\m\8\w\i\9\o\w\t\u\a\p\i\r\a\q\j\e\1\g\h\e\h\k\i\e\x\s\0\v\j\e\j\a\d\n\2\8\k\y\k\e\9\2\y\2\8\u\k\m\f\5\5\h\b\1\d\k\o\9\w\i\8\b\r\p\2\n\g\t\6\x\a\9\3\g\j\i\i\f\e\u\s\z\0\1\0\7\i\v\q\v\m\p\s\0\m\g\6\y\r\i\m\l\5\j\s\7\p\j\2\h\r\q\o\9\d\f\u\n\c\s\w\q\p\5\8\b\g\1\i\4\d\2\t\h\s\h\r\0\q\u\s\s\b\7\p\j\o\n\7\x\8\h\g\v\0\s\2\3\x\a\s\0\y\k\w\a\o\p\x\c\v\c\n\y\u\c\h\0\n\6\p\o\a\1\v\e\q\s\d\c\d\j\x\r\c\o\v\c\d\l\s\1\m\4\e\e\p\u\g\b\d\2\l\5\d\s\r\p\j\6\1\v\1\g\j ]] 00:19:34.527 21:32:55 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:19:34.527 21:32:55 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:19:34.527 [2024-07-11 21:32:55.297984] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:19:34.527 [2024-07-11 21:32:55.298133] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70699 ] 00:19:34.527 [2024-07-11 21:32:55.442325] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:34.786 [2024-07-11 21:32:55.541558] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:35.045  Copying: 512/512 [B] (average 500 kBps) 00:19:35.045 00:19:35.045 21:32:55 -- dd/posix.sh@93 -- # [[ k1dqaxn9r0vc3hmsohrdsdc9461jxv8gqa4xsdogzkio85uk771mll0ug4mynxln2u3xyzm0h615klebaht2jlravgz117jim8uwqejs2n8smd0vsavrx8aw6w0f1vkexjo8vxl678advlexta86kgirb3ggzu9w8to8hn9185lc2tqhvxzfgrdalaio9b1zyviq7m08ow74km5zxzs3xler47ksdjqyx1oe2aqbnvwswin35vbxfm8afvkvgr13mxr7pdbjcn16x5g98kdar11idhcd44m8wi9owtuapiraqje1ghehkiexs0vjejadn28kyke92y28ukmf55hb1dko9wi8brp2ngt6xa93gjiifeusz0107ivqvmps0mg6yriml5js7pj2hrqo9dfuncswqp58bg1i4d2thshr0qussb7pjon7x8hgv0s23xas0ykwaopxcvcnyuch0n6poa1veqsdcdjxrcovcdls1m4eepugbd2l5dsrpj61v1gj == \k\1\d\q\a\x\n\9\r\0\v\c\3\h\m\s\o\h\r\d\s\d\c\9\4\6\1\j\x\v\8\g\q\a\4\x\s\d\o\g\z\k\i\o\8\5\u\k\7\7\1\m\l\l\0\u\g\4\m\y\n\x\l\n\2\u\3\x\y\z\m\0\h\6\1\5\k\l\e\b\a\h\t\2\j\l\r\a\v\g\z\1\1\7\j\i\m\8\u\w\q\e\j\s\2\n\8\s\m\d\0\v\s\a\v\r\x\8\a\w\6\w\0\f\1\v\k\e\x\j\o\8\v\x\l\6\7\8\a\d\v\l\e\x\t\a\8\6\k\g\i\r\b\3\g\g\z\u\9\w\8\t\o\8\h\n\9\1\8\5\l\c\2\t\q\h\v\x\z\f\g\r\d\a\l\a\i\o\9\b\1\z\y\v\i\q\7\m\0\8\o\w\7\4\k\m\5\z\x\z\s\3\x\l\e\r\4\7\k\s\d\j\q\y\x\1\o\e\2\a\q\b\n\v\w\s\w\i\n\3\5\v\b\x\f\m\8\a\f\v\k\v\g\r\1\3\m\x\r\7\p\d\b\j\c\n\1\6\x\5\g\9\8\k\d\a\r\1\1\i\d\h\c\d\4\4\m\8\w\i\9\o\w\t\u\a\p\i\r\a\q\j\e\1\g\h\e\h\k\i\e\x\s\0\v\j\e\j\a\d\n\2\8\k\y\k\e\9\2\y\2\8\u\k\m\f\5\5\h\b\1\d\k\o\9\w\i\8\b\r\p\2\n\g\t\6\x\a\9\3\g\j\i\i\f\e\u\s\z\0\1\0\7\i\v\q\v\m\p\s\0\m\g\6\y\r\i\m\l\5\j\s\7\p\j\2\h\r\q\o\9\d\f\u\n\c\s\w\q\p\5\8\b\g\1\i\4\d\2\t\h\s\h\r\0\q\u\s\s\b\7\p\j\o\n\7\x\8\h\g\v\0\s\2\3\x\a\s\0\y\k\w\a\o\p\x\c\v\c\n\y\u\c\h\0\n\6\p\o\a\1\v\e\q\s\d\c\d\j\x\r\c\o\v\c\d\l\s\1\m\4\e\e\p\u\g\b\d\2\l\5\d\s\r\p\j\6\1\v\1\g\j ]] 00:19:35.045 21:32:55 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:19:35.045 21:32:55 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:19:35.045 [2024-07-11 21:32:55.911758] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:19:35.045 [2024-07-11 21:32:55.911944] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70701 ] 00:19:35.304 [2024-07-11 21:32:56.058296] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:35.304 [2024-07-11 21:32:56.156628] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:35.562  Copying: 512/512 [B] (average 250 kBps) 00:19:35.562 00:19:35.562 21:32:56 -- dd/posix.sh@93 -- # [[ k1dqaxn9r0vc3hmsohrdsdc9461jxv8gqa4xsdogzkio85uk771mll0ug4mynxln2u3xyzm0h615klebaht2jlravgz117jim8uwqejs2n8smd0vsavrx8aw6w0f1vkexjo8vxl678advlexta86kgirb3ggzu9w8to8hn9185lc2tqhvxzfgrdalaio9b1zyviq7m08ow74km5zxzs3xler47ksdjqyx1oe2aqbnvwswin35vbxfm8afvkvgr13mxr7pdbjcn16x5g98kdar11idhcd44m8wi9owtuapiraqje1ghehkiexs0vjejadn28kyke92y28ukmf55hb1dko9wi8brp2ngt6xa93gjiifeusz0107ivqvmps0mg6yriml5js7pj2hrqo9dfuncswqp58bg1i4d2thshr0qussb7pjon7x8hgv0s23xas0ykwaopxcvcnyuch0n6poa1veqsdcdjxrcovcdls1m4eepugbd2l5dsrpj61v1gj == \k\1\d\q\a\x\n\9\r\0\v\c\3\h\m\s\o\h\r\d\s\d\c\9\4\6\1\j\x\v\8\g\q\a\4\x\s\d\o\g\z\k\i\o\8\5\u\k\7\7\1\m\l\l\0\u\g\4\m\y\n\x\l\n\2\u\3\x\y\z\m\0\h\6\1\5\k\l\e\b\a\h\t\2\j\l\r\a\v\g\z\1\1\7\j\i\m\8\u\w\q\e\j\s\2\n\8\s\m\d\0\v\s\a\v\r\x\8\a\w\6\w\0\f\1\v\k\e\x\j\o\8\v\x\l\6\7\8\a\d\v\l\e\x\t\a\8\6\k\g\i\r\b\3\g\g\z\u\9\w\8\t\o\8\h\n\9\1\8\5\l\c\2\t\q\h\v\x\z\f\g\r\d\a\l\a\i\o\9\b\1\z\y\v\i\q\7\m\0\8\o\w\7\4\k\m\5\z\x\z\s\3\x\l\e\r\4\7\k\s\d\j\q\y\x\1\o\e\2\a\q\b\n\v\w\s\w\i\n\3\5\v\b\x\f\m\8\a\f\v\k\v\g\r\1\3\m\x\r\7\p\d\b\j\c\n\1\6\x\5\g\9\8\k\d\a\r\1\1\i\d\h\c\d\4\4\m\8\w\i\9\o\w\t\u\a\p\i\r\a\q\j\e\1\g\h\e\h\k\i\e\x\s\0\v\j\e\j\a\d\n\2\8\k\y\k\e\9\2\y\2\8\u\k\m\f\5\5\h\b\1\d\k\o\9\w\i\8\b\r\p\2\n\g\t\6\x\a\9\3\g\j\i\i\f\e\u\s\z\0\1\0\7\i\v\q\v\m\p\s\0\m\g\6\y\r\i\m\l\5\j\s\7\p\j\2\h\r\q\o\9\d\f\u\n\c\s\w\q\p\5\8\b\g\1\i\4\d\2\t\h\s\h\r\0\q\u\s\s\b\7\p\j\o\n\7\x\8\h\g\v\0\s\2\3\x\a\s\0\y\k\w\a\o\p\x\c\v\c\n\y\u\c\h\0\n\6\p\o\a\1\v\e\q\s\d\c\d\j\x\r\c\o\v\c\d\l\s\1\m\4\e\e\p\u\g\b\d\2\l\5\d\s\r\p\j\6\1\v\1\g\j ]] 00:19:35.562 21:32:56 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:19:35.562 21:32:56 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:19:35.820 [2024-07-11 21:32:56.519300] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:19:35.820 [2024-07-11 21:32:56.519410] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70714 ] 00:19:35.820 [2024-07-11 21:32:56.655573] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:35.820 [2024-07-11 21:32:56.753317] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:36.335  Copying: 512/512 [B] (average 250 kBps) 00:19:36.335 00:19:36.335 21:32:57 -- dd/posix.sh@93 -- # [[ k1dqaxn9r0vc3hmsohrdsdc9461jxv8gqa4xsdogzkio85uk771mll0ug4mynxln2u3xyzm0h615klebaht2jlravgz117jim8uwqejs2n8smd0vsavrx8aw6w0f1vkexjo8vxl678advlexta86kgirb3ggzu9w8to8hn9185lc2tqhvxzfgrdalaio9b1zyviq7m08ow74km5zxzs3xler47ksdjqyx1oe2aqbnvwswin35vbxfm8afvkvgr13mxr7pdbjcn16x5g98kdar11idhcd44m8wi9owtuapiraqje1ghehkiexs0vjejadn28kyke92y28ukmf55hb1dko9wi8brp2ngt6xa93gjiifeusz0107ivqvmps0mg6yriml5js7pj2hrqo9dfuncswqp58bg1i4d2thshr0qussb7pjon7x8hgv0s23xas0ykwaopxcvcnyuch0n6poa1veqsdcdjxrcovcdls1m4eepugbd2l5dsrpj61v1gj == \k\1\d\q\a\x\n\9\r\0\v\c\3\h\m\s\o\h\r\d\s\d\c\9\4\6\1\j\x\v\8\g\q\a\4\x\s\d\o\g\z\k\i\o\8\5\u\k\7\7\1\m\l\l\0\u\g\4\m\y\n\x\l\n\2\u\3\x\y\z\m\0\h\6\1\5\k\l\e\b\a\h\t\2\j\l\r\a\v\g\z\1\1\7\j\i\m\8\u\w\q\e\j\s\2\n\8\s\m\d\0\v\s\a\v\r\x\8\a\w\6\w\0\f\1\v\k\e\x\j\o\8\v\x\l\6\7\8\a\d\v\l\e\x\t\a\8\6\k\g\i\r\b\3\g\g\z\u\9\w\8\t\o\8\h\n\9\1\8\5\l\c\2\t\q\h\v\x\z\f\g\r\d\a\l\a\i\o\9\b\1\z\y\v\i\q\7\m\0\8\o\w\7\4\k\m\5\z\x\z\s\3\x\l\e\r\4\7\k\s\d\j\q\y\x\1\o\e\2\a\q\b\n\v\w\s\w\i\n\3\5\v\b\x\f\m\8\a\f\v\k\v\g\r\1\3\m\x\r\7\p\d\b\j\c\n\1\6\x\5\g\9\8\k\d\a\r\1\1\i\d\h\c\d\4\4\m\8\w\i\9\o\w\t\u\a\p\i\r\a\q\j\e\1\g\h\e\h\k\i\e\x\s\0\v\j\e\j\a\d\n\2\8\k\y\k\e\9\2\y\2\8\u\k\m\f\5\5\h\b\1\d\k\o\9\w\i\8\b\r\p\2\n\g\t\6\x\a\9\3\g\j\i\i\f\e\u\s\z\0\1\0\7\i\v\q\v\m\p\s\0\m\g\6\y\r\i\m\l\5\j\s\7\p\j\2\h\r\q\o\9\d\f\u\n\c\s\w\q\p\5\8\b\g\1\i\4\d\2\t\h\s\h\r\0\q\u\s\s\b\7\p\j\o\n\7\x\8\h\g\v\0\s\2\3\x\a\s\0\y\k\w\a\o\p\x\c\v\c\n\y\u\c\h\0\n\6\p\o\a\1\v\e\q\s\d\c\d\j\x\r\c\o\v\c\d\l\s\1\m\4\e\e\p\u\g\b\d\2\l\5\d\s\r\p\j\6\1\v\1\g\j ]] 00:19:36.335 21:32:57 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:19:36.335 21:32:57 -- dd/posix.sh@86 -- # gen_bytes 512 00:19:36.335 21:32:57 -- dd/common.sh@98 -- # xtrace_disable 00:19:36.335 21:32:57 -- common/autotest_common.sh@10 -- # set +x 00:19:36.335 21:32:57 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:19:36.335 21:32:57 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:19:36.335 [2024-07-11 21:32:57.149992] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:19:36.335 [2024-07-11 21:32:57.150114] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70722 ] 00:19:36.593 [2024-07-11 21:32:57.287594] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:36.593 [2024-07-11 21:32:57.383938] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:36.852  Copying: 512/512 [B] (average 500 kBps) 00:19:36.852 00:19:36.852 21:32:57 -- dd/posix.sh@93 -- # [[ dy9b2bkvkqod5f86m07qdtrr0j2autxnz0fqij1f5z72h9fk7aq04gx7gh1vrj7836cu8yoqj4xkkivoy2he8rzebgzwb898352us30kt7cbitg18c3sdr9jsnrf5izf8s8dj4xuzj8d2yxaojrw91k9yokm2h9l6pivu0gumdpld0olglt8102yun83xslllgp74r66r3dw9jblq3dzgdx996gmeinayleh20459bw3pyrr0p52ek21b3em1fh5xapu01xantszomfrhkx6nuvns661e9ogv2flhpiz5bns13cb7mmv521uxhdb9rszdbj7caawxtf078ydlkjmdcrcqyyp9n08aasksum5ri7uu6cyvtsqfloya1rar9ak78zylx8qq1qc3dah9ckizzkqonp8l0o7wfmsfkerbviax6v9sabh0qpzcs5egrv94ox89yw4dcg9pbhx95rpgef8cd1nputo393qs20m5yhsebm7acdg8lnylw9p5dsw == \d\y\9\b\2\b\k\v\k\q\o\d\5\f\8\6\m\0\7\q\d\t\r\r\0\j\2\a\u\t\x\n\z\0\f\q\i\j\1\f\5\z\7\2\h\9\f\k\7\a\q\0\4\g\x\7\g\h\1\v\r\j\7\8\3\6\c\u\8\y\o\q\j\4\x\k\k\i\v\o\y\2\h\e\8\r\z\e\b\g\z\w\b\8\9\8\3\5\2\u\s\3\0\k\t\7\c\b\i\t\g\1\8\c\3\s\d\r\9\j\s\n\r\f\5\i\z\f\8\s\8\d\j\4\x\u\z\j\8\d\2\y\x\a\o\j\r\w\9\1\k\9\y\o\k\m\2\h\9\l\6\p\i\v\u\0\g\u\m\d\p\l\d\0\o\l\g\l\t\8\1\0\2\y\u\n\8\3\x\s\l\l\l\g\p\7\4\r\6\6\r\3\d\w\9\j\b\l\q\3\d\z\g\d\x\9\9\6\g\m\e\i\n\a\y\l\e\h\2\0\4\5\9\b\w\3\p\y\r\r\0\p\5\2\e\k\2\1\b\3\e\m\1\f\h\5\x\a\p\u\0\1\x\a\n\t\s\z\o\m\f\r\h\k\x\6\n\u\v\n\s\6\6\1\e\9\o\g\v\2\f\l\h\p\i\z\5\b\n\s\1\3\c\b\7\m\m\v\5\2\1\u\x\h\d\b\9\r\s\z\d\b\j\7\c\a\a\w\x\t\f\0\7\8\y\d\l\k\j\m\d\c\r\c\q\y\y\p\9\n\0\8\a\a\s\k\s\u\m\5\r\i\7\u\u\6\c\y\v\t\s\q\f\l\o\y\a\1\r\a\r\9\a\k\7\8\z\y\l\x\8\q\q\1\q\c\3\d\a\h\9\c\k\i\z\z\k\q\o\n\p\8\l\0\o\7\w\f\m\s\f\k\e\r\b\v\i\a\x\6\v\9\s\a\b\h\0\q\p\z\c\s\5\e\g\r\v\9\4\o\x\8\9\y\w\4\d\c\g\9\p\b\h\x\9\5\r\p\g\e\f\8\c\d\1\n\p\u\t\o\3\9\3\q\s\2\0\m\5\y\h\s\e\b\m\7\a\c\d\g\8\l\n\y\l\w\9\p\5\d\s\w ]] 00:19:36.852 21:32:57 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:19:36.852 21:32:57 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:19:36.852 [2024-07-11 21:32:57.740167] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:19:36.852 [2024-07-11 21:32:57.740303] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70729 ] 00:19:37.110 [2024-07-11 21:32:57.879614] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:37.110 [2024-07-11 21:32:57.980520] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:37.626  Copying: 512/512 [B] (average 500 kBps) 00:19:37.626 00:19:37.626 21:32:58 -- dd/posix.sh@93 -- # [[ dy9b2bkvkqod5f86m07qdtrr0j2autxnz0fqij1f5z72h9fk7aq04gx7gh1vrj7836cu8yoqj4xkkivoy2he8rzebgzwb898352us30kt7cbitg18c3sdr9jsnrf5izf8s8dj4xuzj8d2yxaojrw91k9yokm2h9l6pivu0gumdpld0olglt8102yun83xslllgp74r66r3dw9jblq3dzgdx996gmeinayleh20459bw3pyrr0p52ek21b3em1fh5xapu01xantszomfrhkx6nuvns661e9ogv2flhpiz5bns13cb7mmv521uxhdb9rszdbj7caawxtf078ydlkjmdcrcqyyp9n08aasksum5ri7uu6cyvtsqfloya1rar9ak78zylx8qq1qc3dah9ckizzkqonp8l0o7wfmsfkerbviax6v9sabh0qpzcs5egrv94ox89yw4dcg9pbhx95rpgef8cd1nputo393qs20m5yhsebm7acdg8lnylw9p5dsw == \d\y\9\b\2\b\k\v\k\q\o\d\5\f\8\6\m\0\7\q\d\t\r\r\0\j\2\a\u\t\x\n\z\0\f\q\i\j\1\f\5\z\7\2\h\9\f\k\7\a\q\0\4\g\x\7\g\h\1\v\r\j\7\8\3\6\c\u\8\y\o\q\j\4\x\k\k\i\v\o\y\2\h\e\8\r\z\e\b\g\z\w\b\8\9\8\3\5\2\u\s\3\0\k\t\7\c\b\i\t\g\1\8\c\3\s\d\r\9\j\s\n\r\f\5\i\z\f\8\s\8\d\j\4\x\u\z\j\8\d\2\y\x\a\o\j\r\w\9\1\k\9\y\o\k\m\2\h\9\l\6\p\i\v\u\0\g\u\m\d\p\l\d\0\o\l\g\l\t\8\1\0\2\y\u\n\8\3\x\s\l\l\l\g\p\7\4\r\6\6\r\3\d\w\9\j\b\l\q\3\d\z\g\d\x\9\9\6\g\m\e\i\n\a\y\l\e\h\2\0\4\5\9\b\w\3\p\y\r\r\0\p\5\2\e\k\2\1\b\3\e\m\1\f\h\5\x\a\p\u\0\1\x\a\n\t\s\z\o\m\f\r\h\k\x\6\n\u\v\n\s\6\6\1\e\9\o\g\v\2\f\l\h\p\i\z\5\b\n\s\1\3\c\b\7\m\m\v\5\2\1\u\x\h\d\b\9\r\s\z\d\b\j\7\c\a\a\w\x\t\f\0\7\8\y\d\l\k\j\m\d\c\r\c\q\y\y\p\9\n\0\8\a\a\s\k\s\u\m\5\r\i\7\u\u\6\c\y\v\t\s\q\f\l\o\y\a\1\r\a\r\9\a\k\7\8\z\y\l\x\8\q\q\1\q\c\3\d\a\h\9\c\k\i\z\z\k\q\o\n\p\8\l\0\o\7\w\f\m\s\f\k\e\r\b\v\i\a\x\6\v\9\s\a\b\h\0\q\p\z\c\s\5\e\g\r\v\9\4\o\x\8\9\y\w\4\d\c\g\9\p\b\h\x\9\5\r\p\g\e\f\8\c\d\1\n\p\u\t\o\3\9\3\q\s\2\0\m\5\y\h\s\e\b\m\7\a\c\d\g\8\l\n\y\l\w\9\p\5\d\s\w ]] 00:19:37.626 21:32:58 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:19:37.626 21:32:58 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:19:37.626 [2024-07-11 21:32:58.372457] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:19:37.626 [2024-07-11 21:32:58.372594] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70741 ] 00:19:37.626 [2024-07-11 21:32:58.509258] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:37.884 [2024-07-11 21:32:58.606347] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:38.143  Copying: 512/512 [B] (average 500 kBps) 00:19:38.143 00:19:38.143 21:32:58 -- dd/posix.sh@93 -- # [[ dy9b2bkvkqod5f86m07qdtrr0j2autxnz0fqij1f5z72h9fk7aq04gx7gh1vrj7836cu8yoqj4xkkivoy2he8rzebgzwb898352us30kt7cbitg18c3sdr9jsnrf5izf8s8dj4xuzj8d2yxaojrw91k9yokm2h9l6pivu0gumdpld0olglt8102yun83xslllgp74r66r3dw9jblq3dzgdx996gmeinayleh20459bw3pyrr0p52ek21b3em1fh5xapu01xantszomfrhkx6nuvns661e9ogv2flhpiz5bns13cb7mmv521uxhdb9rszdbj7caawxtf078ydlkjmdcrcqyyp9n08aasksum5ri7uu6cyvtsqfloya1rar9ak78zylx8qq1qc3dah9ckizzkqonp8l0o7wfmsfkerbviax6v9sabh0qpzcs5egrv94ox89yw4dcg9pbhx95rpgef8cd1nputo393qs20m5yhsebm7acdg8lnylw9p5dsw == \d\y\9\b\2\b\k\v\k\q\o\d\5\f\8\6\m\0\7\q\d\t\r\r\0\j\2\a\u\t\x\n\z\0\f\q\i\j\1\f\5\z\7\2\h\9\f\k\7\a\q\0\4\g\x\7\g\h\1\v\r\j\7\8\3\6\c\u\8\y\o\q\j\4\x\k\k\i\v\o\y\2\h\e\8\r\z\e\b\g\z\w\b\8\9\8\3\5\2\u\s\3\0\k\t\7\c\b\i\t\g\1\8\c\3\s\d\r\9\j\s\n\r\f\5\i\z\f\8\s\8\d\j\4\x\u\z\j\8\d\2\y\x\a\o\j\r\w\9\1\k\9\y\o\k\m\2\h\9\l\6\p\i\v\u\0\g\u\m\d\p\l\d\0\o\l\g\l\t\8\1\0\2\y\u\n\8\3\x\s\l\l\l\g\p\7\4\r\6\6\r\3\d\w\9\j\b\l\q\3\d\z\g\d\x\9\9\6\g\m\e\i\n\a\y\l\e\h\2\0\4\5\9\b\w\3\p\y\r\r\0\p\5\2\e\k\2\1\b\3\e\m\1\f\h\5\x\a\p\u\0\1\x\a\n\t\s\z\o\m\f\r\h\k\x\6\n\u\v\n\s\6\6\1\e\9\o\g\v\2\f\l\h\p\i\z\5\b\n\s\1\3\c\b\7\m\m\v\5\2\1\u\x\h\d\b\9\r\s\z\d\b\j\7\c\a\a\w\x\t\f\0\7\8\y\d\l\k\j\m\d\c\r\c\q\y\y\p\9\n\0\8\a\a\s\k\s\u\m\5\r\i\7\u\u\6\c\y\v\t\s\q\f\l\o\y\a\1\r\a\r\9\a\k\7\8\z\y\l\x\8\q\q\1\q\c\3\d\a\h\9\c\k\i\z\z\k\q\o\n\p\8\l\0\o\7\w\f\m\s\f\k\e\r\b\v\i\a\x\6\v\9\s\a\b\h\0\q\p\z\c\s\5\e\g\r\v\9\4\o\x\8\9\y\w\4\d\c\g\9\p\b\h\x\9\5\r\p\g\e\f\8\c\d\1\n\p\u\t\o\3\9\3\q\s\2\0\m\5\y\h\s\e\b\m\7\a\c\d\g\8\l\n\y\l\w\9\p\5\d\s\w ]] 00:19:38.143 21:32:58 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:19:38.143 21:32:58 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:19:38.143 [2024-07-11 21:32:58.961515] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
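Note: the spdk_dd runs in this section repeat the same 512-byte copy with --oflag set to direct, nonblock, sync and dsync in turn, each time checking that dd.dump1 still carries the random payload generated into dd.dump0. A minimal sketch of that loop, assuming the paths shown in the trace and substituting a plain cmp for the test's magic-string comparison:

    DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    in=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
    out=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
    for oflag in direct nonblock sync dsync; do
        # same flags as the logged invocations; only the verification step differs
        "$DD" --aio --if="$in" --iflag=nonblock --of="$out" --oflag="$oflag"
        cmp "$in" "$out"
    done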
00:19:38.143 [2024-07-11 21:32:58.961634] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70744 ] 00:19:38.401 [2024-07-11 21:32:59.099905] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:38.401 [2024-07-11 21:32:59.197117] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:38.660  Copying: 512/512 [B] (average 500 kBps) 00:19:38.660 00:19:38.660 ************************************ 00:19:38.660 END TEST dd_flags_misc_forced_aio 00:19:38.660 ************************************ 00:19:38.660 21:32:59 -- dd/posix.sh@93 -- # [[ dy9b2bkvkqod5f86m07qdtrr0j2autxnz0fqij1f5z72h9fk7aq04gx7gh1vrj7836cu8yoqj4xkkivoy2he8rzebgzwb898352us30kt7cbitg18c3sdr9jsnrf5izf8s8dj4xuzj8d2yxaojrw91k9yokm2h9l6pivu0gumdpld0olglt8102yun83xslllgp74r66r3dw9jblq3dzgdx996gmeinayleh20459bw3pyrr0p52ek21b3em1fh5xapu01xantszomfrhkx6nuvns661e9ogv2flhpiz5bns13cb7mmv521uxhdb9rszdbj7caawxtf078ydlkjmdcrcqyyp9n08aasksum5ri7uu6cyvtsqfloya1rar9ak78zylx8qq1qc3dah9ckizzkqonp8l0o7wfmsfkerbviax6v9sabh0qpzcs5egrv94ox89yw4dcg9pbhx95rpgef8cd1nputo393qs20m5yhsebm7acdg8lnylw9p5dsw == \d\y\9\b\2\b\k\v\k\q\o\d\5\f\8\6\m\0\7\q\d\t\r\r\0\j\2\a\u\t\x\n\z\0\f\q\i\j\1\f\5\z\7\2\h\9\f\k\7\a\q\0\4\g\x\7\g\h\1\v\r\j\7\8\3\6\c\u\8\y\o\q\j\4\x\k\k\i\v\o\y\2\h\e\8\r\z\e\b\g\z\w\b\8\9\8\3\5\2\u\s\3\0\k\t\7\c\b\i\t\g\1\8\c\3\s\d\r\9\j\s\n\r\f\5\i\z\f\8\s\8\d\j\4\x\u\z\j\8\d\2\y\x\a\o\j\r\w\9\1\k\9\y\o\k\m\2\h\9\l\6\p\i\v\u\0\g\u\m\d\p\l\d\0\o\l\g\l\t\8\1\0\2\y\u\n\8\3\x\s\l\l\l\g\p\7\4\r\6\6\r\3\d\w\9\j\b\l\q\3\d\z\g\d\x\9\9\6\g\m\e\i\n\a\y\l\e\h\2\0\4\5\9\b\w\3\p\y\r\r\0\p\5\2\e\k\2\1\b\3\e\m\1\f\h\5\x\a\p\u\0\1\x\a\n\t\s\z\o\m\f\r\h\k\x\6\n\u\v\n\s\6\6\1\e\9\o\g\v\2\f\l\h\p\i\z\5\b\n\s\1\3\c\b\7\m\m\v\5\2\1\u\x\h\d\b\9\r\s\z\d\b\j\7\c\a\a\w\x\t\f\0\7\8\y\d\l\k\j\m\d\c\r\c\q\y\y\p\9\n\0\8\a\a\s\k\s\u\m\5\r\i\7\u\u\6\c\y\v\t\s\q\f\l\o\y\a\1\r\a\r\9\a\k\7\8\z\y\l\x\8\q\q\1\q\c\3\d\a\h\9\c\k\i\z\z\k\q\o\n\p\8\l\0\o\7\w\f\m\s\f\k\e\r\b\v\i\a\x\6\v\9\s\a\b\h\0\q\p\z\c\s\5\e\g\r\v\9\4\o\x\8\9\y\w\4\d\c\g\9\p\b\h\x\9\5\r\p\g\e\f\8\c\d\1\n\p\u\t\o\3\9\3\q\s\2\0\m\5\y\h\s\e\b\m\7\a\c\d\g\8\l\n\y\l\w\9\p\5\d\s\w ]] 00:19:38.660 00:19:38.660 real 0m4.885s 00:19:38.660 user 0m2.672s 00:19:38.660 sys 0m1.217s 00:19:38.660 21:32:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:38.660 21:32:59 -- common/autotest_common.sh@10 -- # set +x 00:19:38.660 21:32:59 -- dd/posix.sh@1 -- # cleanup 00:19:38.660 21:32:59 -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:19:38.660 21:32:59 -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:19:38.660 ************************************ 00:19:38.660 END TEST spdk_dd_posix 00:19:38.660 ************************************ 00:19:38.660 00:19:38.660 real 0m22.210s 00:19:38.660 user 0m11.081s 00:19:38.660 sys 0m5.246s 00:19:38.660 21:32:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:38.660 21:32:59 -- common/autotest_common.sh@10 -- # set +x 00:19:38.660 21:32:59 -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:19:38.660 21:32:59 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:19:38.660 21:32:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:38.660 21:32:59 -- 
common/autotest_common.sh@10 -- # set +x 00:19:38.660 ************************************ 00:19:38.660 START TEST spdk_dd_malloc 00:19:38.660 ************************************ 00:19:38.661 21:32:59 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:19:38.920 * Looking for test storage... 00:19:38.920 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:19:38.920 21:32:59 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:38.920 21:32:59 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:38.920 21:32:59 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:38.920 21:32:59 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:38.920 21:32:59 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:38.920 21:32:59 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:38.920 21:32:59 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:38.920 21:32:59 -- paths/export.sh@5 -- # export PATH 00:19:38.920 21:32:59 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:38.920 21:32:59 -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:19:38.920 21:32:59 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:19:38.920 21:32:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:38.920 21:32:59 -- common/autotest_common.sh@10 -- # set +x 00:19:38.920 ************************************ 00:19:38.920 START TEST dd_malloc_copy 00:19:38.920 
************************************ 00:19:38.920 21:32:59 -- common/autotest_common.sh@1104 -- # malloc_copy 00:19:38.920 21:32:59 -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:19:38.920 21:32:59 -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:19:38.920 21:32:59 -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:19:38.920 21:32:59 -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:19:38.920 21:32:59 -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:19:38.920 21:32:59 -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:19:38.920 21:32:59 -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:19:38.920 21:32:59 -- dd/malloc.sh@28 -- # gen_conf 00:19:38.920 21:32:59 -- dd/common.sh@31 -- # xtrace_disable 00:19:38.920 21:32:59 -- common/autotest_common.sh@10 -- # set +x 00:19:38.920 [2024-07-11 21:32:59.737286] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:19:38.920 [2024-07-11 21:32:59.738010] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70817 ] 00:19:38.920 { 00:19:38.920 "subsystems": [ 00:19:38.920 { 00:19:38.920 "subsystem": "bdev", 00:19:38.920 "config": [ 00:19:38.920 { 00:19:38.920 "params": { 00:19:38.920 "block_size": 512, 00:19:38.920 "num_blocks": 1048576, 00:19:38.920 "name": "malloc0" 00:19:38.920 }, 00:19:38.920 "method": "bdev_malloc_create" 00:19:38.920 }, 00:19:38.920 { 00:19:38.920 "params": { 00:19:38.920 "block_size": 512, 00:19:38.920 "num_blocks": 1048576, 00:19:38.920 "name": "malloc1" 00:19:38.920 }, 00:19:38.920 "method": "bdev_malloc_create" 00:19:38.920 }, 00:19:38.920 { 00:19:38.920 "method": "bdev_wait_for_examine" 00:19:38.920 } 00:19:38.920 ] 00:19:38.920 } 00:19:38.920 ] 00:19:38.920 } 00:19:39.178 [2024-07-11 21:32:59.874063] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:39.178 [2024-07-11 21:32:59.971100] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:42.706  Copying: 196/512 [MB] (196 MBps) Copying: 396/512 [MB] (199 MBps) Copying: 512/512 [MB] (average 198 MBps) 00:19:42.706 00:19:42.706 21:33:03 -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:19:42.706 21:33:03 -- dd/malloc.sh@33 -- # gen_conf 00:19:42.706 21:33:03 -- dd/common.sh@31 -- # xtrace_disable 00:19:42.706 21:33:03 -- common/autotest_common.sh@10 -- # set +x 00:19:42.706 [2024-07-11 21:33:03.622782] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
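Note: the dd_malloc_copy runs above create two ramdisk bdevs of 1048576 blocks x 512 bytes (512 MiB each) and copy malloc0 onto malloc1 and back. A self-contained sketch of one such invocation, assuming the JSON config is passed as a temp-file path rather than the /dev/fd/62 pipe that gen_conf uses in the trace:

    DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    conf=$(mktemp)
    cat > "$conf" <<'EOF'
    {"subsystems": [{"subsystem": "bdev", "config": [
      {"method": "bdev_malloc_create", "params": {"name": "malloc0", "block_size": 512, "num_blocks": 1048576}},
      {"method": "bdev_malloc_create", "params": {"name": "malloc1", "block_size": 512, "num_blocks": 1048576}},
      {"method": "bdev_wait_for_examine"}
    ]}]}
    EOF
    "$DD" --ib=malloc0 --ob=malloc1 --json "$conf"   # reverse direction: --ib=malloc1 --ob=malloc0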
00:19:42.706 [2024-07-11 21:33:03.622885] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70865 ] 00:19:42.706 { 00:19:42.706 "subsystems": [ 00:19:42.706 { 00:19:42.706 "subsystem": "bdev", 00:19:42.706 "config": [ 00:19:42.706 { 00:19:42.706 "params": { 00:19:42.706 "block_size": 512, 00:19:42.706 "num_blocks": 1048576, 00:19:42.706 "name": "malloc0" 00:19:42.706 }, 00:19:42.706 "method": "bdev_malloc_create" 00:19:42.706 }, 00:19:42.706 { 00:19:42.706 "params": { 00:19:42.706 "block_size": 512, 00:19:42.706 "num_blocks": 1048576, 00:19:42.706 "name": "malloc1" 00:19:42.706 }, 00:19:42.706 "method": "bdev_malloc_create" 00:19:42.706 }, 00:19:42.706 { 00:19:42.706 "method": "bdev_wait_for_examine" 00:19:42.706 } 00:19:42.706 ] 00:19:42.706 } 00:19:42.706 ] 00:19:42.706 } 00:19:42.964 [2024-07-11 21:33:03.763941] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:42.964 [2024-07-11 21:33:03.854247] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:46.538  Copying: 198/512 [MB] (198 MBps) Copying: 398/512 [MB] (199 MBps) Copying: 512/512 [MB] (average 198 MBps) 00:19:46.538 00:19:46.538 00:19:46.538 real 0m7.723s 00:19:46.538 user 0m6.651s 00:19:46.538 sys 0m0.885s 00:19:46.538 21:33:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:46.538 21:33:07 -- common/autotest_common.sh@10 -- # set +x 00:19:46.538 ************************************ 00:19:46.538 END TEST dd_malloc_copy 00:19:46.538 ************************************ 00:19:46.538 00:19:46.538 real 0m7.854s 00:19:46.538 user 0m6.703s 00:19:46.538 sys 0m0.964s 00:19:46.538 21:33:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:46.538 ************************************ 00:19:46.538 END TEST spdk_dd_malloc 00:19:46.538 ************************************ 00:19:46.538 21:33:07 -- common/autotest_common.sh@10 -- # set +x 00:19:46.796 21:33:07 -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:06.0 0000:00:07.0 00:19:46.796 21:33:07 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:19:46.796 21:33:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:46.796 21:33:07 -- common/autotest_common.sh@10 -- # set +x 00:19:46.796 ************************************ 00:19:46.796 START TEST spdk_dd_bdev_to_bdev 00:19:46.796 ************************************ 00:19:46.796 21:33:07 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:06.0 0000:00:07.0 00:19:46.796 * Looking for test storage... 
00:19:46.796 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:19:46.796 21:33:07 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:46.796 21:33:07 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:46.796 21:33:07 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:46.796 21:33:07 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:46.796 21:33:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:46.796 21:33:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:46.796 21:33:07 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:46.796 21:33:07 -- paths/export.sh@5 -- # export PATH 00:19:46.796 21:33:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:46.796 21:33:07 -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:19:46.796 21:33:07 -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:19:46.796 21:33:07 -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:19:46.796 21:33:07 -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:19:46.796 21:33:07 -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:19:46.796 21:33:07 -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:19:46.796 21:33:07 -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:06.0 00:19:46.796 21:33:07 -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:19:46.796 21:33:07 -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:19:46.796 21:33:07 -- dd/bdev_to_bdev.sh@53 -- # nvme1_pci=0000:00:07.0 00:19:46.796 21:33:07 -- dd/bdev_to_bdev.sh@55 -- # 
method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:06.0' ['trtype']='pcie') 00:19:46.796 21:33:07 -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:19:46.796 21:33:07 -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:07.0' ['trtype']='pcie') 00:19:46.796 21:33:07 -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:19:46.796 21:33:07 -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:19:46.796 21:33:07 -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:19:46.796 21:33:07 -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:19:46.796 21:33:07 -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:19:46.796 21:33:07 -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:19:46.796 21:33:07 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:19:46.796 21:33:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:46.796 21:33:07 -- common/autotest_common.sh@10 -- # set +x 00:19:46.796 ************************************ 00:19:46.796 START TEST dd_inflate_file 00:19:46.796 ************************************ 00:19:46.796 21:33:07 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:19:46.796 [2024-07-11 21:33:07.643070] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:19:46.796 [2024-07-11 21:33:07.643165] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70974 ] 00:19:47.053 [2024-07-11 21:33:07.777429] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:47.053 [2024-07-11 21:33:07.866622] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:47.310  Copying: 64/64 [MB] (average 1641 MBps) 00:19:47.310 00:19:47.310 00:19:47.310 real 0m0.608s 00:19:47.310 user 0m0.313s 00:19:47.310 sys 0m0.174s 00:19:47.310 21:33:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:47.310 21:33:08 -- common/autotest_common.sh@10 -- # set +x 00:19:47.310 ************************************ 00:19:47.310 END TEST dd_inflate_file 00:19:47.310 ************************************ 00:19:47.310 21:33:08 -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:19:47.310 21:33:08 -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:19:47.567 21:33:08 -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:19:47.567 21:33:08 -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:19:47.567 21:33:08 -- dd/common.sh@31 -- # xtrace_disable 00:19:47.568 21:33:08 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:19:47.568 21:33:08 -- common/autotest_common.sh@10 -- # set +x 00:19:47.568 21:33:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:47.568 21:33:08 -- common/autotest_common.sh@10 -- # set +x 00:19:47.568 ************************************ 00:19:47.568 START TEST dd_copy_to_out_bdev 
00:19:47.568 ************************************ 00:19:47.568 21:33:08 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:19:47.568 [2024-07-11 21:33:08.320385] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:19:47.568 [2024-07-11 21:33:08.320501] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71011 ] 00:19:47.568 { 00:19:47.568 "subsystems": [ 00:19:47.568 { 00:19:47.568 "subsystem": "bdev", 00:19:47.568 "config": [ 00:19:47.568 { 00:19:47.568 "params": { 00:19:47.568 "trtype": "pcie", 00:19:47.568 "traddr": "0000:00:06.0", 00:19:47.568 "name": "Nvme0" 00:19:47.568 }, 00:19:47.568 "method": "bdev_nvme_attach_controller" 00:19:47.568 }, 00:19:47.568 { 00:19:47.568 "params": { 00:19:47.568 "trtype": "pcie", 00:19:47.568 "traddr": "0000:00:07.0", 00:19:47.568 "name": "Nvme1" 00:19:47.568 }, 00:19:47.568 "method": "bdev_nvme_attach_controller" 00:19:47.568 }, 00:19:47.568 { 00:19:47.568 "method": "bdev_wait_for_examine" 00:19:47.568 } 00:19:47.568 ] 00:19:47.568 } 00:19:47.568 ] 00:19:47.568 } 00:19:47.568 [2024-07-11 21:33:08.461524] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:47.825 [2024-07-11 21:33:08.564801] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:49.477  Copying: 57/64 [MB] (57 MBps) Copying: 64/64 [MB] (average 57 MBps) 00:19:49.477 00:19:49.477 00:19:49.477 real 0m1.910s 00:19:49.477 user 0m1.616s 00:19:49.477 sys 0m0.229s 00:19:49.477 21:33:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:49.477 21:33:10 -- common/autotest_common.sh@10 -- # set +x 00:19:49.477 ************************************ 00:19:49.477 END TEST dd_copy_to_out_bdev 00:19:49.477 ************************************ 00:19:49.477 21:33:10 -- dd/bdev_to_bdev.sh@113 -- # count=65 00:19:49.477 21:33:10 -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:19:49.477 21:33:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:19:49.477 21:33:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:49.477 21:33:10 -- common/autotest_common.sh@10 -- # set +x 00:19:49.477 ************************************ 00:19:49.477 START TEST dd_offset_magic 00:19:49.477 ************************************ 00:19:49.477 21:33:10 -- common/autotest_common.sh@1104 -- # offset_magic 00:19:49.477 21:33:10 -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:19:49.477 21:33:10 -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:19:49.477 21:33:10 -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:19:49.477 21:33:10 -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:19:49.477 21:33:10 -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:19:49.477 21:33:10 -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:19:49.477 21:33:10 -- dd/common.sh@31 -- # xtrace_disable 00:19:49.477 21:33:10 -- common/autotest_common.sh@10 -- # set +x 00:19:49.477 [2024-07-11 21:33:10.277569] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:19:49.477 [2024-07-11 21:33:10.277668] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71055 ] 00:19:49.477 { 00:19:49.477 "subsystems": [ 00:19:49.477 { 00:19:49.477 "subsystem": "bdev", 00:19:49.477 "config": [ 00:19:49.477 { 00:19:49.477 "params": { 00:19:49.477 "trtype": "pcie", 00:19:49.477 "traddr": "0000:00:06.0", 00:19:49.477 "name": "Nvme0" 00:19:49.477 }, 00:19:49.477 "method": "bdev_nvme_attach_controller" 00:19:49.477 }, 00:19:49.477 { 00:19:49.477 "params": { 00:19:49.477 "trtype": "pcie", 00:19:49.477 "traddr": "0000:00:07.0", 00:19:49.477 "name": "Nvme1" 00:19:49.477 }, 00:19:49.477 "method": "bdev_nvme_attach_controller" 00:19:49.477 }, 00:19:49.477 { 00:19:49.477 "method": "bdev_wait_for_examine" 00:19:49.477 } 00:19:49.477 ] 00:19:49.477 } 00:19:49.477 ] 00:19:49.477 } 00:19:49.477 [2024-07-11 21:33:10.419986] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:49.735 [2024-07-11 21:33:10.512338] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:50.251  Copying: 65/65 [MB] (average 1015 MBps) 00:19:50.251 00:19:50.251 21:33:11 -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:19:50.251 21:33:11 -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:19:50.251 21:33:11 -- dd/common.sh@31 -- # xtrace_disable 00:19:50.251 21:33:11 -- common/autotest_common.sh@10 -- # set +x 00:19:50.251 [2024-07-11 21:33:11.075900] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:19:50.251 [2024-07-11 21:33:11.075996] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71064 ] 00:19:50.251 { 00:19:50.251 "subsystems": [ 00:19:50.251 { 00:19:50.251 "subsystem": "bdev", 00:19:50.251 "config": [ 00:19:50.251 { 00:19:50.251 "params": { 00:19:50.251 "trtype": "pcie", 00:19:50.251 "traddr": "0000:00:06.0", 00:19:50.251 "name": "Nvme0" 00:19:50.251 }, 00:19:50.251 "method": "bdev_nvme_attach_controller" 00:19:50.251 }, 00:19:50.251 { 00:19:50.251 "params": { 00:19:50.251 "trtype": "pcie", 00:19:50.251 "traddr": "0000:00:07.0", 00:19:50.251 "name": "Nvme1" 00:19:50.251 }, 00:19:50.251 "method": "bdev_nvme_attach_controller" 00:19:50.251 }, 00:19:50.251 { 00:19:50.251 "method": "bdev_wait_for_examine" 00:19:50.251 } 00:19:50.251 ] 00:19:50.251 } 00:19:50.251 ] 00:19:50.251 } 00:19:50.509 [2024-07-11 21:33:11.215732] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:50.509 [2024-07-11 21:33:11.314067] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:51.025  Copying: 1024/1024 [kB] (average 1000 MBps) 00:19:51.025 00:19:51.025 21:33:11 -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:19:51.025 21:33:11 -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:19:51.025 21:33:11 -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:19:51.025 21:33:11 -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:19:51.025 21:33:11 -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:19:51.025 21:33:11 -- dd/common.sh@31 -- # xtrace_disable 00:19:51.025 21:33:11 -- common/autotest_common.sh@10 -- # set +x 00:19:51.025 [2024-07-11 21:33:11.814698] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:19:51.025 [2024-07-11 21:33:11.814806] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71084 ] 00:19:51.025 { 00:19:51.025 "subsystems": [ 00:19:51.025 { 00:19:51.025 "subsystem": "bdev", 00:19:51.025 "config": [ 00:19:51.025 { 00:19:51.025 "params": { 00:19:51.025 "trtype": "pcie", 00:19:51.025 "traddr": "0000:00:06.0", 00:19:51.025 "name": "Nvme0" 00:19:51.025 }, 00:19:51.025 "method": "bdev_nvme_attach_controller" 00:19:51.025 }, 00:19:51.025 { 00:19:51.025 "params": { 00:19:51.025 "trtype": "pcie", 00:19:51.025 "traddr": "0000:00:07.0", 00:19:51.025 "name": "Nvme1" 00:19:51.025 }, 00:19:51.025 "method": "bdev_nvme_attach_controller" 00:19:51.025 }, 00:19:51.025 { 00:19:51.025 "method": "bdev_wait_for_examine" 00:19:51.025 } 00:19:51.025 ] 00:19:51.025 } 00:19:51.025 ] 00:19:51.025 } 00:19:51.025 [2024-07-11 21:33:11.950146] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:51.283 [2024-07-11 21:33:12.051279] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:51.799  Copying: 65/65 [MB] (average 1101 MBps) 00:19:51.799 00:19:51.799 21:33:12 -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:19:51.799 21:33:12 -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:19:51.799 21:33:12 -- dd/common.sh@31 -- # xtrace_disable 00:19:51.799 21:33:12 -- common/autotest_common.sh@10 -- # set +x 00:19:51.799 [2024-07-11 21:33:12.630299] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
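Note: the dd_offset_magic passes above push 65 MiB from Nvme0n1 onto Nvme1n1 at 16 MiB and 64 MiB offsets, then read 1 MiB back from each offset and look for the 26-character 'This Is Our Magic, find it' marker. The seek/skip pairing, using the same flags as the trace and the hypothetical nvme.json attach config from the earlier sketch:

    DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    "$DD" --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json nvme.json
    "$DD" --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json nvme.json
    head -c 26 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1   # expect: This Is Our Magic, find it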
00:19:51.799 [2024-07-11 21:33:12.630403] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71104 ] 00:19:51.799 { 00:19:51.799 "subsystems": [ 00:19:51.799 { 00:19:51.799 "subsystem": "bdev", 00:19:51.799 "config": [ 00:19:51.799 { 00:19:51.799 "params": { 00:19:51.799 "trtype": "pcie", 00:19:51.799 "traddr": "0000:00:06.0", 00:19:51.799 "name": "Nvme0" 00:19:51.799 }, 00:19:51.799 "method": "bdev_nvme_attach_controller" 00:19:51.799 }, 00:19:51.799 { 00:19:51.799 "params": { 00:19:51.799 "trtype": "pcie", 00:19:51.799 "traddr": "0000:00:07.0", 00:19:51.799 "name": "Nvme1" 00:19:51.799 }, 00:19:51.799 "method": "bdev_nvme_attach_controller" 00:19:51.799 }, 00:19:51.799 { 00:19:51.799 "method": "bdev_wait_for_examine" 00:19:51.799 } 00:19:51.799 ] 00:19:51.799 } 00:19:51.799 ] 00:19:51.799 } 00:19:52.058 [2024-07-11 21:33:12.769418] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:52.058 [2024-07-11 21:33:12.852587] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:52.593  Copying: 1024/1024 [kB] (average 1000 MBps) 00:19:52.593 00:19:52.593 21:33:13 -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:19:52.593 ************************************ 00:19:52.593 END TEST dd_offset_magic 00:19:52.593 ************************************ 00:19:52.593 21:33:13 -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:19:52.593 00:19:52.593 real 0m3.071s 00:19:52.593 user 0m2.177s 00:19:52.593 sys 0m0.677s 00:19:52.593 21:33:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:52.593 21:33:13 -- common/autotest_common.sh@10 -- # set +x 00:19:52.593 21:33:13 -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:19:52.593 21:33:13 -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:19:52.593 21:33:13 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:19:52.593 21:33:13 -- dd/common.sh@11 -- # local nvme_ref= 00:19:52.593 21:33:13 -- dd/common.sh@12 -- # local size=4194330 00:19:52.593 21:33:13 -- dd/common.sh@14 -- # local bs=1048576 00:19:52.593 21:33:13 -- dd/common.sh@15 -- # local count=5 00:19:52.593 21:33:13 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:19:52.593 21:33:13 -- dd/common.sh@18 -- # gen_conf 00:19:52.593 21:33:13 -- dd/common.sh@31 -- # xtrace_disable 00:19:52.593 21:33:13 -- common/autotest_common.sh@10 -- # set +x 00:19:52.593 [2024-07-11 21:33:13.382403] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:19:52.593 [2024-07-11 21:33:13.382510] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71139 ] 00:19:52.593 { 00:19:52.593 "subsystems": [ 00:19:52.593 { 00:19:52.593 "subsystem": "bdev", 00:19:52.593 "config": [ 00:19:52.593 { 00:19:52.593 "params": { 00:19:52.593 "trtype": "pcie", 00:19:52.593 "traddr": "0000:00:06.0", 00:19:52.593 "name": "Nvme0" 00:19:52.593 }, 00:19:52.593 "method": "bdev_nvme_attach_controller" 00:19:52.593 }, 00:19:52.593 { 00:19:52.593 "params": { 00:19:52.593 "trtype": "pcie", 00:19:52.593 "traddr": "0000:00:07.0", 00:19:52.593 "name": "Nvme1" 00:19:52.593 }, 00:19:52.593 "method": "bdev_nvme_attach_controller" 00:19:52.593 }, 00:19:52.593 { 00:19:52.593 "method": "bdev_wait_for_examine" 00:19:52.593 } 00:19:52.593 ] 00:19:52.593 } 00:19:52.593 ] 00:19:52.593 } 00:19:52.593 [2024-07-11 21:33:13.523860] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:52.850 [2024-07-11 21:33:13.605619] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:53.108  Copying: 5120/5120 [kB] (average 1250 MBps) 00:19:53.108 00:19:53.108 21:33:14 -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:19:53.108 21:33:14 -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:19:53.108 21:33:14 -- dd/common.sh@11 -- # local nvme_ref= 00:19:53.108 21:33:14 -- dd/common.sh@12 -- # local size=4194330 00:19:53.108 21:33:14 -- dd/common.sh@14 -- # local bs=1048576 00:19:53.108 21:33:14 -- dd/common.sh@15 -- # local count=5 00:19:53.108 21:33:14 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:19:53.108 21:33:14 -- dd/common.sh@18 -- # gen_conf 00:19:53.108 21:33:14 -- dd/common.sh@31 -- # xtrace_disable 00:19:53.108 21:33:14 -- common/autotest_common.sh@10 -- # set +x 00:19:53.366 [2024-07-11 21:33:14.090205] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
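Note: the clear_nvme calls above zero out the first 4194330 bytes of each namespace by rounding up to five 1 MiB blocks. The equivalent pair of commands, again assuming the hypothetical nvme.json attach config sketched earlier:

    DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    "$DD" --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json nvme.json
    "$DD" --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json nvme.json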
00:19:53.366 [2024-07-11 21:33:14.090306] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71148 ] 00:19:53.366 { 00:19:53.366 "subsystems": [ 00:19:53.366 { 00:19:53.366 "subsystem": "bdev", 00:19:53.366 "config": [ 00:19:53.366 { 00:19:53.366 "params": { 00:19:53.366 "trtype": "pcie", 00:19:53.366 "traddr": "0000:00:06.0", 00:19:53.366 "name": "Nvme0" 00:19:53.367 }, 00:19:53.367 "method": "bdev_nvme_attach_controller" 00:19:53.367 }, 00:19:53.367 { 00:19:53.367 "params": { 00:19:53.367 "trtype": "pcie", 00:19:53.367 "traddr": "0000:00:07.0", 00:19:53.367 "name": "Nvme1" 00:19:53.367 }, 00:19:53.367 "method": "bdev_nvme_attach_controller" 00:19:53.367 }, 00:19:53.367 { 00:19:53.367 "method": "bdev_wait_for_examine" 00:19:53.367 } 00:19:53.367 ] 00:19:53.367 } 00:19:53.367 ] 00:19:53.367 } 00:19:53.367 [2024-07-11 21:33:14.227894] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:53.367 [2024-07-11 21:33:14.307614] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:53.883  Copying: 5120/5120 [kB] (average 1000 MBps) 00:19:53.883 00:19:53.883 21:33:14 -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:19:53.883 ************************************ 00:19:53.883 END TEST spdk_dd_bdev_to_bdev 00:19:53.883 ************************************ 00:19:53.883 00:19:53.883 real 0m7.264s 00:19:53.883 user 0m5.149s 00:19:53.883 sys 0m1.595s 00:19:53.883 21:33:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:53.883 21:33:14 -- common/autotest_common.sh@10 -- # set +x 00:19:53.883 21:33:14 -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:19:53.883 21:33:14 -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:19:53.883 21:33:14 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:19:53.883 21:33:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:53.883 21:33:14 -- common/autotest_common.sh@10 -- # set +x 00:19:53.883 ************************************ 00:19:53.883 START TEST spdk_dd_uring 00:19:53.883 ************************************ 00:19:53.883 21:33:14 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:19:54.141 * Looking for test storage... 
00:19:54.141 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:19:54.141 21:33:14 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:54.141 21:33:14 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:54.141 21:33:14 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:54.141 21:33:14 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:54.141 21:33:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:54.141 21:33:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:54.141 21:33:14 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:54.141 21:33:14 -- paths/export.sh@5 -- # export PATH 00:19:54.141 21:33:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:54.141 21:33:14 -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:19:54.141 21:33:14 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:19:54.141 21:33:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:54.141 21:33:14 -- common/autotest_common.sh@10 -- # set +x 00:19:54.141 ************************************ 00:19:54.141 START TEST dd_uring_copy 00:19:54.141 ************************************ 00:19:54.141 21:33:14 -- common/autotest_common.sh@1104 -- # uring_zram_copy 00:19:54.141 21:33:14 -- dd/uring.sh@15 -- # local zram_dev_id 00:19:54.141 21:33:14 -- dd/uring.sh@16 -- # local magic 00:19:54.141 21:33:14 -- dd/uring.sh@17 -- # local 
magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:19:54.141 21:33:14 -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:19:54.141 21:33:14 -- dd/uring.sh@19 -- # local verify_magic 00:19:54.141 21:33:14 -- dd/uring.sh@21 -- # init_zram 00:19:54.141 21:33:14 -- dd/common.sh@163 -- # [[ -e /sys/class/zram-control ]] 00:19:54.141 21:33:14 -- dd/common.sh@164 -- # return 00:19:54.141 21:33:14 -- dd/uring.sh@22 -- # create_zram_dev 00:19:54.141 21:33:14 -- dd/common.sh@168 -- # cat /sys/class/zram-control/hot_add 00:19:54.141 21:33:14 -- dd/uring.sh@22 -- # zram_dev_id=1 00:19:54.141 21:33:14 -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:19:54.141 21:33:14 -- dd/common.sh@181 -- # local id=1 00:19:54.141 21:33:14 -- dd/common.sh@182 -- # local size=512M 00:19:54.141 21:33:14 -- dd/common.sh@184 -- # [[ -e /sys/block/zram1 ]] 00:19:54.141 21:33:14 -- dd/common.sh@186 -- # echo 512M 00:19:54.141 21:33:14 -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:19:54.141 21:33:14 -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:19:54.141 21:33:14 -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:19:54.141 21:33:14 -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:19:54.141 21:33:14 -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:19:54.141 21:33:14 -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:19:54.141 21:33:14 -- dd/uring.sh@41 -- # gen_bytes 1024 00:19:54.141 21:33:14 -- dd/common.sh@98 -- # xtrace_disable 00:19:54.141 21:33:14 -- common/autotest_common.sh@10 -- # set +x 00:19:54.141 21:33:14 -- dd/uring.sh@41 -- # magic=92bc2xrcnunfxvw64ekh0cba01d8rm384wz9h7kejbu7jouuwkpll65a0g6d2uoejokgz2vppi76bp8bmp41r8ve1kv9xme0bf7c9wdox5v4mk8xezkrd6unxxq81f49j91azqcxgn705gxpn9rrm76w1ubdbq3dnac7q6sa7egqs39xfoex1mknm4o7f4bjbk90yi69asldann179yows9qqighwfsqf9pjxi42e0gv6brkmg11mu2s8il3lnpup550fa1c129lnu5o04c3d7c20dwb6303t03ffsqxg2y3uy76vy8tf6frsk3cic1yd4wcqjjr590annl91whw7tibft7ixwn817jjjnguqg38b9oj8atu458h4bmaaitu6bqx30wsb0ukyopqdh0jju9sj1vxb6s5h3txsnv6ysgm60okem6cpce595xxoqkbhylwo3wf40wdrqd9acvsna762xw14dxxy2zgtnke9jdxat7pkmpxv0vax5o6z2oryg8wxosa8st2gjjk0b4wtpsh2c5ohav8jc2s0f5s7qdfr3z9iw9p294fdmifgdgmi2ocsb9iyi6nnk72qj1pn3ndo2s8wwkbhv6aqiu8iynwabo5vydg4dbswxwjsiorkzq3ncfcv2ubeimcguxf3qihw4mnxq24sizrnj6mpt5lrszlhldr6hbpz984iq9y4epmuwzdgmrdye4fiyb2tdq9a3vv5sb0p1460lyryx69rvxn9v61fer508ouv3f5t41dhlo87ix6tfbpaemg81amkv9pi0mg06gjvh2jihnsm91xw4sl5kxdzs8sybad68viymybryqkqvtrshjflhw4ovvpzcmwxq5vhb05fttvf189jhqj13bw68jskkf2hvvumkex6glfhrizerwkwbklbttt2ekkcxtp0lr2x4toh2kjynmuh8ycb9jkvre6hdku0oqtldxvtvscsirg4kr3exu36jzktr3wxbiyo8lnlkdqc1m0v46q5dwq8787 00:19:54.142 21:33:14 -- dd/uring.sh@42 -- # echo 
92bc2xrcnunfxvw64ekh0cba01d8rm384wz9h7kejbu7jouuwkpll65a0g6d2uoejokgz2vppi76bp8bmp41r8ve1kv9xme0bf7c9wdox5v4mk8xezkrd6unxxq81f49j91azqcxgn705gxpn9rrm76w1ubdbq3dnac7q6sa7egqs39xfoex1mknm4o7f4bjbk90yi69asldann179yows9qqighwfsqf9pjxi42e0gv6brkmg11mu2s8il3lnpup550fa1c129lnu5o04c3d7c20dwb6303t03ffsqxg2y3uy76vy8tf6frsk3cic1yd4wcqjjr590annl91whw7tibft7ixwn817jjjnguqg38b9oj8atu458h4bmaaitu6bqx30wsb0ukyopqdh0jju9sj1vxb6s5h3txsnv6ysgm60okem6cpce595xxoqkbhylwo3wf40wdrqd9acvsna762xw14dxxy2zgtnke9jdxat7pkmpxv0vax5o6z2oryg8wxosa8st2gjjk0b4wtpsh2c5ohav8jc2s0f5s7qdfr3z9iw9p294fdmifgdgmi2ocsb9iyi6nnk72qj1pn3ndo2s8wwkbhv6aqiu8iynwabo5vydg4dbswxwjsiorkzq3ncfcv2ubeimcguxf3qihw4mnxq24sizrnj6mpt5lrszlhldr6hbpz984iq9y4epmuwzdgmrdye4fiyb2tdq9a3vv5sb0p1460lyryx69rvxn9v61fer508ouv3f5t41dhlo87ix6tfbpaemg81amkv9pi0mg06gjvh2jihnsm91xw4sl5kxdzs8sybad68viymybryqkqvtrshjflhw4ovvpzcmwxq5vhb05fttvf189jhqj13bw68jskkf2hvvumkex6glfhrizerwkwbklbttt2ekkcxtp0lr2x4toh2kjynmuh8ycb9jkvre6hdku0oqtldxvtvscsirg4kr3exu36jzktr3wxbiyo8lnlkdqc1m0v46q5dwq8787 00:19:54.142 21:33:14 -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:19:54.142 [2024-07-11 21:33:14.995017] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:19:54.142 [2024-07-11 21:33:14.995773] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71216 ] 00:19:54.399 [2024-07-11 21:33:15.136271] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:54.399 [2024-07-11 21:33:15.213987] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:55.531  Copying: 511/511 [MB] (average 1276 MBps) 00:19:55.531 00:19:55.531 21:33:16 -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:19:55.531 21:33:16 -- dd/uring.sh@54 -- # gen_conf 00:19:55.531 21:33:16 -- dd/common.sh@31 -- # xtrace_disable 00:19:55.531 21:33:16 -- common/autotest_common.sh@10 -- # set +x 00:19:55.531 [2024-07-11 21:33:16.276014] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
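Note: the dd_uring_copy setup above claims a zram device (hot_add returned id 1), sizes it to 512M, exposes it as a uring bdev named uring0 alongside a 512 MiB malloc0 bdev, and fills magic.dump0 with a 1024-character magic string zero-padded to the 512 MiB device size. A sketch of the device setup and the first copy; the sysfs path for the size is an assumption, since the trace only shows the 512M value being echoed, and uring.json is a hypothetical filename mirroring the logged config:

    id=$(cat /sys/class/zram-control/hot_add)      # returned 1 in this run
    echo 512M > /sys/block/zram"$id"/disksize      # assumed target path; not shown in the trace
    DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    cat > uring.json <<'EOF'
    {"subsystems": [{"subsystem": "bdev", "config": [
      {"method": "bdev_malloc_create", "params": {"name": "malloc0", "block_size": 512, "num_blocks": 1048576}},
      {"method": "bdev_uring_create", "params": {"filename": "/dev/zram1", "name": "uring0"}},
      {"method": "bdev_wait_for_examine"}
    ]}]}
    EOF
    "$DD" --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json uring.json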
00:19:55.531 [2024-07-11 21:33:16.276099] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71230 ] 00:19:55.531 { 00:19:55.531 "subsystems": [ 00:19:55.532 { 00:19:55.532 "subsystem": "bdev", 00:19:55.532 "config": [ 00:19:55.532 { 00:19:55.532 "params": { 00:19:55.532 "block_size": 512, 00:19:55.532 "num_blocks": 1048576, 00:19:55.532 "name": "malloc0" 00:19:55.532 }, 00:19:55.532 "method": "bdev_malloc_create" 00:19:55.532 }, 00:19:55.532 { 00:19:55.532 "params": { 00:19:55.532 "filename": "/dev/zram1", 00:19:55.532 "name": "uring0" 00:19:55.532 }, 00:19:55.532 "method": "bdev_uring_create" 00:19:55.532 }, 00:19:55.532 { 00:19:55.532 "method": "bdev_wait_for_examine" 00:19:55.532 } 00:19:55.532 ] 00:19:55.532 } 00:19:55.532 ] 00:19:55.532 } 00:19:55.532 [2024-07-11 21:33:16.410680] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:55.790 [2024-07-11 21:33:16.494929] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:58.919  Copying: 202/512 [MB] (202 MBps) Copying: 406/512 [MB] (203 MBps) Copying: 512/512 [MB] (average 203 MBps) 00:19:58.919 00:19:58.919 21:33:19 -- dd/uring.sh@60 -- # gen_conf 00:19:58.919 21:33:19 -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:19:58.919 21:33:19 -- dd/common.sh@31 -- # xtrace_disable 00:19:58.919 21:33:19 -- common/autotest_common.sh@10 -- # set +x 00:19:58.919 [2024-07-11 21:33:19.701791] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:19:58.919 [2024-07-11 21:33:19.702046] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71284 ] 00:19:58.919 { 00:19:58.919 "subsystems": [ 00:19:58.919 { 00:19:58.919 "subsystem": "bdev", 00:19:58.919 "config": [ 00:19:58.919 { 00:19:58.919 "params": { 00:19:58.919 "block_size": 512, 00:19:58.919 "num_blocks": 1048576, 00:19:58.919 "name": "malloc0" 00:19:58.919 }, 00:19:58.919 "method": "bdev_malloc_create" 00:19:58.919 }, 00:19:58.919 { 00:19:58.919 "params": { 00:19:58.919 "filename": "/dev/zram1", 00:19:58.919 "name": "uring0" 00:19:58.919 }, 00:19:58.919 "method": "bdev_uring_create" 00:19:58.919 }, 00:19:58.919 { 00:19:58.919 "method": "bdev_wait_for_examine" 00:19:58.919 } 00:19:58.919 ] 00:19:58.919 } 00:19:58.919 ] 00:19:58.919 } 00:19:58.919 [2024-07-11 21:33:19.840190] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:59.177 [2024-07-11 21:33:19.923628] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:03.587  Copying: 148/512 [MB] (148 MBps) Copying: 288/512 [MB] (140 MBps) Copying: 427/512 [MB] (138 MBps) Copying: 512/512 [MB] (average 138 MBps) 00:20:03.587 00:20:03.587 21:33:24 -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:20:03.587 21:33:24 -- dd/uring.sh@66 -- # [[ 
92bc2xrcnunfxvw64ekh0cba01d8rm384wz9h7kejbu7jouuwkpll65a0g6d2uoejokgz2vppi76bp8bmp41r8ve1kv9xme0bf7c9wdox5v4mk8xezkrd6unxxq81f49j91azqcxgn705gxpn9rrm76w1ubdbq3dnac7q6sa7egqs39xfoex1mknm4o7f4bjbk90yi69asldann179yows9qqighwfsqf9pjxi42e0gv6brkmg11mu2s8il3lnpup550fa1c129lnu5o04c3d7c20dwb6303t03ffsqxg2y3uy76vy8tf6frsk3cic1yd4wcqjjr590annl91whw7tibft7ixwn817jjjnguqg38b9oj8atu458h4bmaaitu6bqx30wsb0ukyopqdh0jju9sj1vxb6s5h3txsnv6ysgm60okem6cpce595xxoqkbhylwo3wf40wdrqd9acvsna762xw14dxxy2zgtnke9jdxat7pkmpxv0vax5o6z2oryg8wxosa8st2gjjk0b4wtpsh2c5ohav8jc2s0f5s7qdfr3z9iw9p294fdmifgdgmi2ocsb9iyi6nnk72qj1pn3ndo2s8wwkbhv6aqiu8iynwabo5vydg4dbswxwjsiorkzq3ncfcv2ubeimcguxf3qihw4mnxq24sizrnj6mpt5lrszlhldr6hbpz984iq9y4epmuwzdgmrdye4fiyb2tdq9a3vv5sb0p1460lyryx69rvxn9v61fer508ouv3f5t41dhlo87ix6tfbpaemg81amkv9pi0mg06gjvh2jihnsm91xw4sl5kxdzs8sybad68viymybryqkqvtrshjflhw4ovvpzcmwxq5vhb05fttvf189jhqj13bw68jskkf2hvvumkex6glfhrizerwkwbklbttt2ekkcxtp0lr2x4toh2kjynmuh8ycb9jkvre6hdku0oqtldxvtvscsirg4kr3exu36jzktr3wxbiyo8lnlkdqc1m0v46q5dwq8787 == \9\2\b\c\2\x\r\c\n\u\n\f\x\v\w\6\4\e\k\h\0\c\b\a\0\1\d\8\r\m\3\8\4\w\z\9\h\7\k\e\j\b\u\7\j\o\u\u\w\k\p\l\l\6\5\a\0\g\6\d\2\u\o\e\j\o\k\g\z\2\v\p\p\i\7\6\b\p\8\b\m\p\4\1\r\8\v\e\1\k\v\9\x\m\e\0\b\f\7\c\9\w\d\o\x\5\v\4\m\k\8\x\e\z\k\r\d\6\u\n\x\x\q\8\1\f\4\9\j\9\1\a\z\q\c\x\g\n\7\0\5\g\x\p\n\9\r\r\m\7\6\w\1\u\b\d\b\q\3\d\n\a\c\7\q\6\s\a\7\e\g\q\s\3\9\x\f\o\e\x\1\m\k\n\m\4\o\7\f\4\b\j\b\k\9\0\y\i\6\9\a\s\l\d\a\n\n\1\7\9\y\o\w\s\9\q\q\i\g\h\w\f\s\q\f\9\p\j\x\i\4\2\e\0\g\v\6\b\r\k\m\g\1\1\m\u\2\s\8\i\l\3\l\n\p\u\p\5\5\0\f\a\1\c\1\2\9\l\n\u\5\o\0\4\c\3\d\7\c\2\0\d\w\b\6\3\0\3\t\0\3\f\f\s\q\x\g\2\y\3\u\y\7\6\v\y\8\t\f\6\f\r\s\k\3\c\i\c\1\y\d\4\w\c\q\j\j\r\5\9\0\a\n\n\l\9\1\w\h\w\7\t\i\b\f\t\7\i\x\w\n\8\1\7\j\j\j\n\g\u\q\g\3\8\b\9\o\j\8\a\t\u\4\5\8\h\4\b\m\a\a\i\t\u\6\b\q\x\3\0\w\s\b\0\u\k\y\o\p\q\d\h\0\j\j\u\9\s\j\1\v\x\b\6\s\5\h\3\t\x\s\n\v\6\y\s\g\m\6\0\o\k\e\m\6\c\p\c\e\5\9\5\x\x\o\q\k\b\h\y\l\w\o\3\w\f\4\0\w\d\r\q\d\9\a\c\v\s\n\a\7\6\2\x\w\1\4\d\x\x\y\2\z\g\t\n\k\e\9\j\d\x\a\t\7\p\k\m\p\x\v\0\v\a\x\5\o\6\z\2\o\r\y\g\8\w\x\o\s\a\8\s\t\2\g\j\j\k\0\b\4\w\t\p\s\h\2\c\5\o\h\a\v\8\j\c\2\s\0\f\5\s\7\q\d\f\r\3\z\9\i\w\9\p\2\9\4\f\d\m\i\f\g\d\g\m\i\2\o\c\s\b\9\i\y\i\6\n\n\k\7\2\q\j\1\p\n\3\n\d\o\2\s\8\w\w\k\b\h\v\6\a\q\i\u\8\i\y\n\w\a\b\o\5\v\y\d\g\4\d\b\s\w\x\w\j\s\i\o\r\k\z\q\3\n\c\f\c\v\2\u\b\e\i\m\c\g\u\x\f\3\q\i\h\w\4\m\n\x\q\2\4\s\i\z\r\n\j\6\m\p\t\5\l\r\s\z\l\h\l\d\r\6\h\b\p\z\9\8\4\i\q\9\y\4\e\p\m\u\w\z\d\g\m\r\d\y\e\4\f\i\y\b\2\t\d\q\9\a\3\v\v\5\s\b\0\p\1\4\6\0\l\y\r\y\x\6\9\r\v\x\n\9\v\6\1\f\e\r\5\0\8\o\u\v\3\f\5\t\4\1\d\h\l\o\8\7\i\x\6\t\f\b\p\a\e\m\g\8\1\a\m\k\v\9\p\i\0\m\g\0\6\g\j\v\h\2\j\i\h\n\s\m\9\1\x\w\4\s\l\5\k\x\d\z\s\8\s\y\b\a\d\6\8\v\i\y\m\y\b\r\y\q\k\q\v\t\r\s\h\j\f\l\h\w\4\o\v\v\p\z\c\m\w\x\q\5\v\h\b\0\5\f\t\t\v\f\1\8\9\j\h\q\j\1\3\b\w\6\8\j\s\k\k\f\2\h\v\v\u\m\k\e\x\6\g\l\f\h\r\i\z\e\r\w\k\w\b\k\l\b\t\t\t\2\e\k\k\c\x\t\p\0\l\r\2\x\4\t\o\h\2\k\j\y\n\m\u\h\8\y\c\b\9\j\k\v\r\e\6\h\d\k\u\0\o\q\t\l\d\x\v\t\v\s\c\s\i\r\g\4\k\r\3\e\x\u\3\6\j\z\k\t\r\3\w\x\b\i\y\o\8\l\n\l\k\d\q\c\1\m\0\v\4\6\q\5\d\w\q\8\7\8\7 ]] 00:20:03.587 21:33:24 -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:20:03.587 21:33:24 -- dd/uring.sh@69 -- # [[ 
92bc2xrcnunfxvw64ekh0cba01d8rm384wz9h7kejbu7jouuwkpll65a0g6d2uoejokgz2vppi76bp8bmp41r8ve1kv9xme0bf7c9wdox5v4mk8xezkrd6unxxq81f49j91azqcxgn705gxpn9rrm76w1ubdbq3dnac7q6sa7egqs39xfoex1mknm4o7f4bjbk90yi69asldann179yows9qqighwfsqf9pjxi42e0gv6brkmg11mu2s8il3lnpup550fa1c129lnu5o04c3d7c20dwb6303t03ffsqxg2y3uy76vy8tf6frsk3cic1yd4wcqjjr590annl91whw7tibft7ixwn817jjjnguqg38b9oj8atu458h4bmaaitu6bqx30wsb0ukyopqdh0jju9sj1vxb6s5h3txsnv6ysgm60okem6cpce595xxoqkbhylwo3wf40wdrqd9acvsna762xw14dxxy2zgtnke9jdxat7pkmpxv0vax5o6z2oryg8wxosa8st2gjjk0b4wtpsh2c5ohav8jc2s0f5s7qdfr3z9iw9p294fdmifgdgmi2ocsb9iyi6nnk72qj1pn3ndo2s8wwkbhv6aqiu8iynwabo5vydg4dbswxwjsiorkzq3ncfcv2ubeimcguxf3qihw4mnxq24sizrnj6mpt5lrszlhldr6hbpz984iq9y4epmuwzdgmrdye4fiyb2tdq9a3vv5sb0p1460lyryx69rvxn9v61fer508ouv3f5t41dhlo87ix6tfbpaemg81amkv9pi0mg06gjvh2jihnsm91xw4sl5kxdzs8sybad68viymybryqkqvtrshjflhw4ovvpzcmwxq5vhb05fttvf189jhqj13bw68jskkf2hvvumkex6glfhrizerwkwbklbttt2ekkcxtp0lr2x4toh2kjynmuh8ycb9jkvre6hdku0oqtldxvtvscsirg4kr3exu36jzktr3wxbiyo8lnlkdqc1m0v46q5dwq8787 == \9\2\b\c\2\x\r\c\n\u\n\f\x\v\w\6\4\e\k\h\0\c\b\a\0\1\d\8\r\m\3\8\4\w\z\9\h\7\k\e\j\b\u\7\j\o\u\u\w\k\p\l\l\6\5\a\0\g\6\d\2\u\o\e\j\o\k\g\z\2\v\p\p\i\7\6\b\p\8\b\m\p\4\1\r\8\v\e\1\k\v\9\x\m\e\0\b\f\7\c\9\w\d\o\x\5\v\4\m\k\8\x\e\z\k\r\d\6\u\n\x\x\q\8\1\f\4\9\j\9\1\a\z\q\c\x\g\n\7\0\5\g\x\p\n\9\r\r\m\7\6\w\1\u\b\d\b\q\3\d\n\a\c\7\q\6\s\a\7\e\g\q\s\3\9\x\f\o\e\x\1\m\k\n\m\4\o\7\f\4\b\j\b\k\9\0\y\i\6\9\a\s\l\d\a\n\n\1\7\9\y\o\w\s\9\q\q\i\g\h\w\f\s\q\f\9\p\j\x\i\4\2\e\0\g\v\6\b\r\k\m\g\1\1\m\u\2\s\8\i\l\3\l\n\p\u\p\5\5\0\f\a\1\c\1\2\9\l\n\u\5\o\0\4\c\3\d\7\c\2\0\d\w\b\6\3\0\3\t\0\3\f\f\s\q\x\g\2\y\3\u\y\7\6\v\y\8\t\f\6\f\r\s\k\3\c\i\c\1\y\d\4\w\c\q\j\j\r\5\9\0\a\n\n\l\9\1\w\h\w\7\t\i\b\f\t\7\i\x\w\n\8\1\7\j\j\j\n\g\u\q\g\3\8\b\9\o\j\8\a\t\u\4\5\8\h\4\b\m\a\a\i\t\u\6\b\q\x\3\0\w\s\b\0\u\k\y\o\p\q\d\h\0\j\j\u\9\s\j\1\v\x\b\6\s\5\h\3\t\x\s\n\v\6\y\s\g\m\6\0\o\k\e\m\6\c\p\c\e\5\9\5\x\x\o\q\k\b\h\y\l\w\o\3\w\f\4\0\w\d\r\q\d\9\a\c\v\s\n\a\7\6\2\x\w\1\4\d\x\x\y\2\z\g\t\n\k\e\9\j\d\x\a\t\7\p\k\m\p\x\v\0\v\a\x\5\o\6\z\2\o\r\y\g\8\w\x\o\s\a\8\s\t\2\g\j\j\k\0\b\4\w\t\p\s\h\2\c\5\o\h\a\v\8\j\c\2\s\0\f\5\s\7\q\d\f\r\3\z\9\i\w\9\p\2\9\4\f\d\m\i\f\g\d\g\m\i\2\o\c\s\b\9\i\y\i\6\n\n\k\7\2\q\j\1\p\n\3\n\d\o\2\s\8\w\w\k\b\h\v\6\a\q\i\u\8\i\y\n\w\a\b\o\5\v\y\d\g\4\d\b\s\w\x\w\j\s\i\o\r\k\z\q\3\n\c\f\c\v\2\u\b\e\i\m\c\g\u\x\f\3\q\i\h\w\4\m\n\x\q\2\4\s\i\z\r\n\j\6\m\p\t\5\l\r\s\z\l\h\l\d\r\6\h\b\p\z\9\8\4\i\q\9\y\4\e\p\m\u\w\z\d\g\m\r\d\y\e\4\f\i\y\b\2\t\d\q\9\a\3\v\v\5\s\b\0\p\1\4\6\0\l\y\r\y\x\6\9\r\v\x\n\9\v\6\1\f\e\r\5\0\8\o\u\v\3\f\5\t\4\1\d\h\l\o\8\7\i\x\6\t\f\b\p\a\e\m\g\8\1\a\m\k\v\9\p\i\0\m\g\0\6\g\j\v\h\2\j\i\h\n\s\m\9\1\x\w\4\s\l\5\k\x\d\z\s\8\s\y\b\a\d\6\8\v\i\y\m\y\b\r\y\q\k\q\v\t\r\s\h\j\f\l\h\w\4\o\v\v\p\z\c\m\w\x\q\5\v\h\b\0\5\f\t\t\v\f\1\8\9\j\h\q\j\1\3\b\w\6\8\j\s\k\k\f\2\h\v\v\u\m\k\e\x\6\g\l\f\h\r\i\z\e\r\w\k\w\b\k\l\b\t\t\t\2\e\k\k\c\x\t\p\0\l\r\2\x\4\t\o\h\2\k\j\y\n\m\u\h\8\y\c\b\9\j\k\v\r\e\6\h\d\k\u\0\o\q\t\l\d\x\v\t\v\s\c\s\i\r\g\4\k\r\3\e\x\u\3\6\j\z\k\t\r\3\w\x\b\i\y\o\8\l\n\l\k\d\q\c\1\m\0\v\4\6\q\5\d\w\q\8\7\8\7 ]] 00:20:03.587 21:33:24 -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:20:03.845 21:33:24 -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:20:03.845 21:33:24 -- dd/uring.sh@75 -- # gen_conf 00:20:03.845 21:33:24 -- dd/common.sh@31 -- # xtrace_disable 00:20:03.845 21:33:24 -- common/autotest_common.sh@10 -- # set +x 
00:20:04.102 [2024-07-11 21:33:24.809836] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:20:04.102 [2024-07-11 21:33:24.809945] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71383 ] 00:20:04.102 { 00:20:04.102 "subsystems": [ 00:20:04.102 { 00:20:04.102 "subsystem": "bdev", 00:20:04.102 "config": [ 00:20:04.102 { 00:20:04.102 "params": { 00:20:04.102 "block_size": 512, 00:20:04.102 "num_blocks": 1048576, 00:20:04.102 "name": "malloc0" 00:20:04.102 }, 00:20:04.102 "method": "bdev_malloc_create" 00:20:04.102 }, 00:20:04.102 { 00:20:04.102 "params": { 00:20:04.102 "filename": "/dev/zram1", 00:20:04.102 "name": "uring0" 00:20:04.103 }, 00:20:04.103 "method": "bdev_uring_create" 00:20:04.103 }, 00:20:04.103 { 00:20:04.103 "method": "bdev_wait_for_examine" 00:20:04.103 } 00:20:04.103 ] 00:20:04.103 } 00:20:04.103 ] 00:20:04.103 } 00:20:04.103 [2024-07-11 21:33:24.951069] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:04.103 [2024-07-11 21:33:25.041474] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:08.541  Copying: 137/512 [MB] (137 MBps) Copying: 281/512 [MB] (144 MBps) Copying: 428/512 [MB] (146 MBps) Copying: 512/512 [MB] (average 143 MBps) 00:20:08.541 00:20:08.541 21:33:29 -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:20:08.541 21:33:29 -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:20:08.541 21:33:29 -- dd/uring.sh@87 -- # : 00:20:08.541 21:33:29 -- dd/uring.sh@87 -- # : 00:20:08.541 21:33:29 -- dd/uring.sh@87 -- # gen_conf 00:20:08.541 21:33:29 -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:20:08.541 21:33:29 -- dd/common.sh@31 -- # xtrace_disable 00:20:08.541 21:33:29 -- common/autotest_common.sh@10 -- # set +x 00:20:08.541 [2024-07-11 21:33:29.339025] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:20:08.541 [2024-07-11 21:33:29.339157] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71444 ] 00:20:08.541 { 00:20:08.541 "subsystems": [ 00:20:08.541 { 00:20:08.541 "subsystem": "bdev", 00:20:08.541 "config": [ 00:20:08.541 { 00:20:08.541 "params": { 00:20:08.541 "block_size": 512, 00:20:08.541 "num_blocks": 1048576, 00:20:08.541 "name": "malloc0" 00:20:08.541 }, 00:20:08.541 "method": "bdev_malloc_create" 00:20:08.541 }, 00:20:08.541 { 00:20:08.541 "params": { 00:20:08.541 "filename": "/dev/zram1", 00:20:08.541 "name": "uring0" 00:20:08.541 }, 00:20:08.541 "method": "bdev_uring_create" 00:20:08.541 }, 00:20:08.541 { 00:20:08.541 "params": { 00:20:08.541 "name": "uring0" 00:20:08.541 }, 00:20:08.541 "method": "bdev_uring_delete" 00:20:08.541 }, 00:20:08.541 { 00:20:08.541 "method": "bdev_wait_for_examine" 00:20:08.541 } 00:20:08.541 ] 00:20:08.541 } 00:20:08.541 ] 00:20:08.541 } 00:20:08.541 [2024-07-11 21:33:29.477944] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:08.799 [2024-07-11 21:33:29.566455] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:09.313  Copying: 0/0 [B] (average 0 Bps) 00:20:09.313 00:20:09.314 21:33:30 -- dd/uring.sh@94 -- # : 00:20:09.314 21:33:30 -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:20:09.314 21:33:30 -- common/autotest_common.sh@640 -- # local es=0 00:20:09.314 21:33:30 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:20:09.314 21:33:30 -- dd/uring.sh@94 -- # gen_conf 00:20:09.314 21:33:30 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:09.314 21:33:30 -- dd/common.sh@31 -- # xtrace_disable 00:20:09.314 21:33:30 -- common/autotest_common.sh@10 -- # set +x 00:20:09.314 21:33:30 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:09.314 21:33:30 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:09.314 21:33:30 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:09.314 21:33:30 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:09.314 21:33:30 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:09.314 21:33:30 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:09.314 21:33:30 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:20:09.314 21:33:30 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:20:09.572 { 00:20:09.572 "subsystems": [ 00:20:09.572 { 00:20:09.572 "subsystem": "bdev", 00:20:09.572 "config": [ 00:20:09.572 { 00:20:09.572 "params": { 00:20:09.572 "block_size": 512, 00:20:09.572 "num_blocks": 1048576, 00:20:09.572 "name": "malloc0" 00:20:09.572 }, 00:20:09.572 "method": "bdev_malloc_create" 00:20:09.572 }, 00:20:09.572 { 00:20:09.572 "params": { 00:20:09.572 "filename": "/dev/zram1", 00:20:09.572 "name": "uring0" 00:20:09.572 }, 00:20:09.572 "method": "bdev_uring_create" 00:20:09.572 }, 00:20:09.572 { 00:20:09.572 "params": { 00:20:09.572 "name": "uring0" 
00:20:09.572 }, 00:20:09.572 "method": "bdev_uring_delete" 00:20:09.572 }, 00:20:09.572 { 00:20:09.572 "method": "bdev_wait_for_examine" 00:20:09.572 } 00:20:09.572 ] 00:20:09.572 } 00:20:09.572 ] 00:20:09.572 } 00:20:09.572 [2024-07-11 21:33:30.312059] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:20:09.572 [2024-07-11 21:33:30.312162] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71469 ] 00:20:09.572 [2024-07-11 21:33:30.452426] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:09.830 [2024-07-11 21:33:30.541629] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:10.088 [2024-07-11 21:33:30.794850] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:20:10.088 [2024-07-11 21:33:30.794913] spdk_dd.c: 932:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:20:10.088 [2024-07-11 21:33:30.794926] spdk_dd.c:1074:dd_run: *ERROR*: uring0: No such device 00:20:10.088 [2024-07-11 21:33:30.794936] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:20:10.346 [2024-07-11 21:33:31.100157] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:20:10.347 21:33:31 -- common/autotest_common.sh@643 -- # es=237 00:20:10.347 21:33:31 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:20:10.347 21:33:31 -- common/autotest_common.sh@652 -- # es=109 00:20:10.347 21:33:31 -- common/autotest_common.sh@653 -- # case "$es" in 00:20:10.347 21:33:31 -- common/autotest_common.sh@660 -- # es=1 00:20:10.347 21:33:31 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:20:10.347 21:33:31 -- dd/uring.sh@99 -- # remove_zram_dev 1 00:20:10.347 21:33:31 -- dd/common.sh@172 -- # local id=1 00:20:10.347 21:33:31 -- dd/common.sh@174 -- # [[ -e /sys/block/zram1 ]] 00:20:10.347 21:33:31 -- dd/common.sh@176 -- # echo 1 00:20:10.347 21:33:31 -- dd/common.sh@177 -- # echo 1 00:20:10.347 21:33:31 -- dd/uring.sh@100 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:20:10.605 00:20:10.605 real 0m16.531s 00:20:10.605 user 0m9.545s 00:20:10.605 sys 0m6.343s 00:20:10.605 21:33:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:10.605 21:33:31 -- common/autotest_common.sh@10 -- # set +x 00:20:10.605 ************************************ 00:20:10.605 END TEST dd_uring_copy 00:20:10.605 ************************************ 00:20:10.605 00:20:10.605 real 0m16.670s 00:20:10.605 user 0m9.593s 00:20:10.605 sys 0m6.430s 00:20:10.605 21:33:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:10.605 ************************************ 00:20:10.605 END TEST spdk_dd_uring 00:20:10.605 21:33:31 -- common/autotest_common.sh@10 -- # set +x 00:20:10.605 ************************************ 00:20:10.605 21:33:31 -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:20:10.605 21:33:31 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:20:10.605 21:33:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:10.605 21:33:31 -- common/autotest_common.sh@10 -- # set +x 00:20:10.605 ************************************ 00:20:10.605 START TEST spdk_dd_sparse 00:20:10.605 ************************************ 00:20:10.605 21:33:31 -- common/autotest_common.sh@1104 -- # 
/home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:20:10.864 * Looking for test storage... 00:20:10.864 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:20:10.864 21:33:31 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:10.864 21:33:31 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:10.864 21:33:31 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:10.864 21:33:31 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:10.864 21:33:31 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:10.864 21:33:31 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:10.864 21:33:31 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:10.864 21:33:31 -- paths/export.sh@5 -- # export PATH 00:20:10.864 21:33:31 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:10.864 21:33:31 -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:20:10.864 21:33:31 -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:20:10.864 21:33:31 -- dd/sparse.sh@110 -- # file1=file_zero1 00:20:10.864 21:33:31 -- dd/sparse.sh@111 -- # file2=file_zero2 00:20:10.864 21:33:31 -- dd/sparse.sh@112 -- # file3=file_zero3 00:20:10.864 21:33:31 -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:20:10.864 21:33:31 -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:20:10.864 21:33:31 -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:20:10.864 21:33:31 -- dd/sparse.sh@118 -- # prepare 00:20:10.864 21:33:31 -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 
104857600 00:20:10.864 21:33:31 -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:20:10.864 1+0 records in 00:20:10.864 1+0 records out 00:20:10.864 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00655445 s, 640 MB/s 00:20:10.864 21:33:31 -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:20:10.864 1+0 records in 00:20:10.864 1+0 records out 00:20:10.864 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00693319 s, 605 MB/s 00:20:10.864 21:33:31 -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:20:10.864 1+0 records in 00:20:10.864 1+0 records out 00:20:10.864 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00390632 s, 1.1 GB/s 00:20:10.864 21:33:31 -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:20:10.864 21:33:31 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:20:10.864 21:33:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:10.864 21:33:31 -- common/autotest_common.sh@10 -- # set +x 00:20:10.864 ************************************ 00:20:10.864 START TEST dd_sparse_file_to_file 00:20:10.864 ************************************ 00:20:10.864 21:33:31 -- common/autotest_common.sh@1104 -- # file_to_file 00:20:10.864 21:33:31 -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:20:10.864 21:33:31 -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:20:10.864 21:33:31 -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:20:10.864 21:33:31 -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:20:10.864 21:33:31 -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:20:10.864 21:33:31 -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:20:10.864 21:33:31 -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:20:10.864 21:33:31 -- dd/sparse.sh@41 -- # gen_conf 00:20:10.864 21:33:31 -- dd/common.sh@31 -- # xtrace_disable 00:20:10.864 21:33:31 -- common/autotest_common.sh@10 -- # set +x 00:20:10.864 [2024-07-11 21:33:31.719373] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:20:10.864 [2024-07-11 21:33:31.719470] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71563 ] 00:20:10.864 { 00:20:10.864 "subsystems": [ 00:20:10.864 { 00:20:10.864 "subsystem": "bdev", 00:20:10.864 "config": [ 00:20:10.864 { 00:20:10.864 "params": { 00:20:10.864 "block_size": 4096, 00:20:10.864 "filename": "dd_sparse_aio_disk", 00:20:10.864 "name": "dd_aio" 00:20:10.864 }, 00:20:10.864 "method": "bdev_aio_create" 00:20:10.864 }, 00:20:10.864 { 00:20:10.864 "params": { 00:20:10.864 "lvs_name": "dd_lvstore", 00:20:10.864 "bdev_name": "dd_aio" 00:20:10.864 }, 00:20:10.864 "method": "bdev_lvol_create_lvstore" 00:20:10.864 }, 00:20:10.864 { 00:20:10.864 "method": "bdev_wait_for_examine" 00:20:10.864 } 00:20:10.864 ] 00:20:10.864 } 00:20:10.864 ] 00:20:10.864 } 00:20:11.122 [2024-07-11 21:33:31.857212] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:11.122 [2024-07-11 21:33:31.938287] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:11.380  Copying: 12/36 [MB] (average 1333 MBps) 00:20:11.380 00:20:11.380 21:33:32 -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:20:11.638 21:33:32 -- dd/sparse.sh@47 -- # stat1_s=37748736 00:20:11.638 21:33:32 -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:20:11.638 21:33:32 -- dd/sparse.sh@48 -- # stat2_s=37748736 00:20:11.638 21:33:32 -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:20:11.638 21:33:32 -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:20:11.638 21:33:32 -- dd/sparse.sh@52 -- # stat1_b=24576 00:20:11.638 21:33:32 -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:20:11.638 21:33:32 -- dd/sparse.sh@53 -- # stat2_b=24576 00:20:11.638 21:33:32 -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:20:11.638 00:20:11.638 real 0m0.680s 00:20:11.638 user 0m0.404s 00:20:11.638 sys 0m0.183s 00:20:11.638 21:33:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:11.638 ************************************ 00:20:11.638 END TEST dd_sparse_file_to_file 00:20:11.638 ************************************ 00:20:11.638 21:33:32 -- common/autotest_common.sh@10 -- # set +x 00:20:11.638 21:33:32 -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:20:11.638 21:33:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:20:11.638 21:33:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:11.638 21:33:32 -- common/autotest_common.sh@10 -- # set +x 00:20:11.638 ************************************ 00:20:11.638 START TEST dd_sparse_file_to_bdev 00:20:11.638 ************************************ 00:20:11.638 21:33:32 -- common/autotest_common.sh@1104 -- # file_to_bdev 00:20:11.638 21:33:32 -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:20:11.638 21:33:32 -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:20:11.638 21:33:32 -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size']='37748736' ['thin_provision']='true') 00:20:11.638 21:33:32 -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:20:11.638 21:33:32 -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:20:11.638 21:33:32 -- dd/sparse.sh@73 -- # gen_conf 
00:20:11.638 21:33:32 -- dd/common.sh@31 -- # xtrace_disable 00:20:11.638 21:33:32 -- common/autotest_common.sh@10 -- # set +x 00:20:11.638 { 00:20:11.638 "subsystems": [ 00:20:11.638 { 00:20:11.638 "subsystem": "bdev", 00:20:11.638 "config": [ 00:20:11.638 { 00:20:11.638 "params": { 00:20:11.638 "block_size": 4096, 00:20:11.638 "filename": "dd_sparse_aio_disk", 00:20:11.638 "name": "dd_aio" 00:20:11.638 }, 00:20:11.638 "method": "bdev_aio_create" 00:20:11.638 }, 00:20:11.638 { 00:20:11.638 "params": { 00:20:11.638 "lvs_name": "dd_lvstore", 00:20:11.638 "lvol_name": "dd_lvol", 00:20:11.638 "size": 37748736, 00:20:11.638 "thin_provision": true 00:20:11.638 }, 00:20:11.638 "method": "bdev_lvol_create" 00:20:11.638 }, 00:20:11.638 { 00:20:11.638 "method": "bdev_wait_for_examine" 00:20:11.638 } 00:20:11.638 ] 00:20:11.638 } 00:20:11.638 ] 00:20:11.638 } 00:20:11.638 [2024-07-11 21:33:32.467723] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:20:11.638 [2024-07-11 21:33:32.467827] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71602 ] 00:20:11.896 [2024-07-11 21:33:32.609946] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:11.896 [2024-07-11 21:33:32.709117] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:11.896 [2024-07-11 21:33:32.812816] vbdev_lvol_rpc.c: 347:rpc_bdev_lvol_create: *WARNING*: vbdev_lvol_rpc_req_size: deprecated feature rpc_bdev_lvol_create/resize req.size to be removed in v23.09 00:20:12.154  Copying: 12/36 [MB] (average 521 MBps)[2024-07-11 21:33:32.855272] app.c: 883:log_deprecation_hits: *WARNING*: vbdev_lvol_rpc_req_size: deprecation 'rpc_bdev_lvol_create/resize req.size' scheduled for removal in v23.09 hit 1 times 00:20:12.154 00:20:12.154 00:20:12.428 00:20:12.428 real 0m0.712s 00:20:12.428 user 0m0.431s 00:20:12.428 sys 0m0.193s 00:20:12.428 21:33:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:12.428 ************************************ 00:20:12.428 END TEST dd_sparse_file_to_bdev 00:20:12.428 ************************************ 00:20:12.428 21:33:33 -- common/autotest_common.sh@10 -- # set +x 00:20:12.428 21:33:33 -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:20:12.428 21:33:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:20:12.428 21:33:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:12.428 21:33:33 -- common/autotest_common.sh@10 -- # set +x 00:20:12.428 ************************************ 00:20:12.428 START TEST dd_sparse_bdev_to_file 00:20:12.428 ************************************ 00:20:12.428 21:33:33 -- common/autotest_common.sh@1104 -- # bdev_to_file 00:20:12.428 21:33:33 -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:20:12.428 21:33:33 -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:20:12.428 21:33:33 -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:20:12.428 21:33:33 -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:20:12.429 21:33:33 -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:20:12.429 21:33:33 -- dd/sparse.sh@91 -- # gen_conf 00:20:12.429 21:33:33 -- dd/common.sh@31 -- # xtrace_disable 00:20:12.429 21:33:33 -- 
common/autotest_common.sh@10 -- # set +x 00:20:12.429 [2024-07-11 21:33:33.208261] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:20:12.429 [2024-07-11 21:33:33.208358] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71635 ] 00:20:12.429 { 00:20:12.429 "subsystems": [ 00:20:12.429 { 00:20:12.429 "subsystem": "bdev", 00:20:12.429 "config": [ 00:20:12.429 { 00:20:12.429 "params": { 00:20:12.429 "block_size": 4096, 00:20:12.429 "filename": "dd_sparse_aio_disk", 00:20:12.429 "name": "dd_aio" 00:20:12.429 }, 00:20:12.429 "method": "bdev_aio_create" 00:20:12.429 }, 00:20:12.429 { 00:20:12.429 "method": "bdev_wait_for_examine" 00:20:12.429 } 00:20:12.429 ] 00:20:12.429 } 00:20:12.429 ] 00:20:12.429 } 00:20:12.429 [2024-07-11 21:33:33.350397] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:12.720 [2024-07-11 21:33:33.445528] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:12.978  Copying: 12/36 [MB] (average 1200 MBps) 00:20:12.978 00:20:12.978 21:33:33 -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:20:12.978 21:33:33 -- dd/sparse.sh@97 -- # stat2_s=37748736 00:20:12.978 21:33:33 -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:20:12.979 21:33:33 -- dd/sparse.sh@98 -- # stat3_s=37748736 00:20:12.979 21:33:33 -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:20:12.979 21:33:33 -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:20:12.979 21:33:33 -- dd/sparse.sh@102 -- # stat2_b=24576 00:20:12.979 21:33:33 -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:20:12.979 21:33:33 -- dd/sparse.sh@103 -- # stat3_b=24576 00:20:12.979 21:33:33 -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:20:12.979 00:20:12.979 real 0m0.694s 00:20:12.979 user 0m0.405s 00:20:12.979 sys 0m0.204s 00:20:12.979 21:33:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:12.979 21:33:33 -- common/autotest_common.sh@10 -- # set +x 00:20:12.979 ************************************ 00:20:12.979 END TEST dd_sparse_bdev_to_file 00:20:12.979 ************************************ 00:20:12.979 21:33:33 -- dd/sparse.sh@1 -- # cleanup 00:20:12.979 21:33:33 -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:20:12.979 21:33:33 -- dd/sparse.sh@12 -- # rm file_zero1 00:20:12.979 21:33:33 -- dd/sparse.sh@13 -- # rm file_zero2 00:20:12.979 21:33:33 -- dd/sparse.sh@14 -- # rm file_zero3 00:20:12.979 00:20:12.979 real 0m2.376s 00:20:12.979 user 0m1.329s 00:20:12.979 sys 0m0.776s 00:20:12.979 21:33:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:12.979 ************************************ 00:20:12.979 END TEST spdk_dd_sparse 00:20:12.979 21:33:33 -- common/autotest_common.sh@10 -- # set +x 00:20:12.979 ************************************ 00:20:13.236 21:33:33 -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:20:13.236 21:33:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:20:13.236 21:33:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:13.236 21:33:33 -- common/autotest_common.sh@10 -- # set +x 00:20:13.236 ************************************ 00:20:13.236 START TEST spdk_dd_negative 00:20:13.236 ************************************ 00:20:13.236 21:33:33 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 
00:20:13.236 * Looking for test storage... 00:20:13.236 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:20:13.236 21:33:34 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:13.236 21:33:34 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:13.236 21:33:34 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:13.236 21:33:34 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:13.237 21:33:34 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:13.237 21:33:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:13.237 21:33:34 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:13.237 21:33:34 -- paths/export.sh@5 -- # export PATH 00:20:13.237 21:33:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:13.237 21:33:34 -- dd/negative_dd.sh@101 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:20:13.237 21:33:34 -- dd/negative_dd.sh@102 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:20:13.237 21:33:34 -- dd/negative_dd.sh@104 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:20:13.237 21:33:34 -- dd/negative_dd.sh@105 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:20:13.237 21:33:34 -- dd/negative_dd.sh@107 -- # run_test dd_invalid_arguments invalid_arguments 00:20:13.237 21:33:34 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:20:13.237 21:33:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:13.237 21:33:34 -- 
common/autotest_common.sh@10 -- # set +x 00:20:13.237 ************************************ 00:20:13.237 START TEST dd_invalid_arguments 00:20:13.237 ************************************ 00:20:13.237 21:33:34 -- common/autotest_common.sh@1104 -- # invalid_arguments 00:20:13.237 21:33:34 -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:20:13.237 21:33:34 -- common/autotest_common.sh@640 -- # local es=0 00:20:13.237 21:33:34 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:20:13.237 21:33:34 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:13.237 21:33:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:13.237 21:33:34 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:13.237 21:33:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:13.237 21:33:34 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:13.237 21:33:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:13.237 21:33:34 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:13.237 21:33:34 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:20:13.237 21:33:34 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:20:13.237 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:20:13.237 options: 00:20:13.237 -c, --config JSON config file (default none) 00:20:13.237 --json JSON config file (default none) 00:20:13.237 --json-ignore-init-errors 00:20:13.237 don't exit on invalid config entry 00:20:13.237 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:20:13.237 -g, --single-file-segments 00:20:13.237 force creating just one hugetlbfs file 00:20:13.237 -h, --help show this usage 00:20:13.237 -i, --shm-id shared memory ID (optional) 00:20:13.237 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:20:13.237 --lcores lcore to CPU mapping list. The list is in the format: 00:20:13.237 [<,lcores[@CPUs]>...] 00:20:13.237 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:20:13.237 Within the group, '-' is used for range separator, 00:20:13.237 ',' is used for single number separator. 00:20:13.237 '( )' can be omitted for single element group, 00:20:13.237 '@' can be omitted if cpus and lcores have the same value 00:20:13.237 -n, --mem-channels channel number of memory channels used for DPDK 00:20:13.237 -p, --main-core main (primary) core for DPDK 00:20:13.237 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:20:13.237 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:20:13.237 --disable-cpumask-locks Disable CPU core lock files. 
00:20:13.237 --silence-noticelog disable notice level logging to stderr 00:20:13.237 --msg-mempool-size global message memory pool size in count (default: 262143) 00:20:13.237 -u, --no-pci disable PCI access 00:20:13.237 --wait-for-rpc wait for RPCs to initialize subsystems 00:20:13.237 --max-delay maximum reactor delay (in microseconds) 00:20:13.237 -B, --pci-blocked pci addr to block (can be used more than once) 00:20:13.237 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:20:13.237 -R, --huge-unlink unlink huge files after initialization 00:20:13.237 -v, --version print SPDK version 00:20:13.237 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:20:13.237 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:20:13.237 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:20:13.237 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:20:13.237 Tracepoints vary in size and can use more than one trace entry. 00:20:13.237 --rpcs-allowed comma-separated list of permitted RPCS 00:20:13.237 --env-context Opaque context for use of the env implementation 00:20:13.237 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:20:13.237 --no-huge run without using hugepages 00:20:13.237 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, blobfs_rw, ftl_core, ftl_init, gpt_parse, idxd, ioat, iscsi_init, json_util, log, log_rpc, lvol, lvol_rpc, notify_rpc, nvme, nvme_cuse, nvme_vfio, opal, reactor, rpc, rpc_client, sock, sock_posix, thread, trace, uring, vbdev_delay, vbdev_gpt, vbdev_lvol, vbdev_opal, vbdev_passthru, vbdev_split, vbdev_zone_block, vfio_pci, vfio_user, virtio, virtio_blk, virtio_dev, virtio_pci, virtio_user, virtio_vfio_user, vmd) 00:20:13.237 -e, --tpoint-group [:] 00:20:13.237 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, all) 00:20:13.237 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:20:13.237 Groups and masks /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:20:13.237 [2024-07-11 21:33:34.127628] spdk_dd.c:1460:main: *ERROR*: Invalid arguments 00:20:13.237 can be combined (e.g. thread,bdev:0x1). 00:20:13.237 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:20:13.237 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:20:13.237 [--------- DD Options ---------] 00:20:13.237 --if Input file. Must specify either --if or --ib. 00:20:13.237 --ib Input bdev. Must specifier either --if or --ib 00:20:13.237 --of Output file. Must specify either --of or --ob. 00:20:13.237 --ob Output bdev. Must specify either --of or --ob. 00:20:13.237 --iflag Input file flags. 00:20:13.237 --oflag Output file flags. 00:20:13.237 --bs I/O unit size (default: 4096) 00:20:13.237 --qd Queue depth (default: 2) 00:20:13.237 --count I/O unit count. The number of I/O units to copy. 
(default: all) 00:20:13.237 --skip Skip this many I/O units at start of input. (default: 0) 00:20:13.237 --seek Skip this many I/O units at start of output. (default: 0) 00:20:13.237 --aio Force usage of AIO. (by default io_uring is used if available) 00:20:13.237 --sparse Enable hole skipping in input target 00:20:13.237 Available iflag and oflag values: 00:20:13.237 append - append mode 00:20:13.237 direct - use direct I/O for data 00:20:13.237 directory - fail unless a directory 00:20:13.237 dsync - use synchronized I/O for data 00:20:13.237 noatime - do not update access time 00:20:13.237 noctty - do not assign controlling terminal from file 00:20:13.237 nofollow - do not follow symlinks 00:20:13.237 nonblock - use non-blocking I/O 00:20:13.237 sync - use synchronized I/O for data and metadata 00:20:13.237 21:33:34 -- common/autotest_common.sh@643 -- # es=2 00:20:13.237 21:33:34 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:20:13.237 21:33:34 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:20:13.237 21:33:34 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:20:13.237 00:20:13.237 real 0m0.078s 00:20:13.237 user 0m0.040s 00:20:13.237 sys 0m0.036s 00:20:13.237 21:33:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:13.237 21:33:34 -- common/autotest_common.sh@10 -- # set +x 00:20:13.237 ************************************ 00:20:13.237 END TEST dd_invalid_arguments 00:20:13.237 ************************************ 00:20:13.496 21:33:34 -- dd/negative_dd.sh@108 -- # run_test dd_double_input double_input 00:20:13.496 21:33:34 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:20:13.496 21:33:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:13.496 21:33:34 -- common/autotest_common.sh@10 -- # set +x 00:20:13.496 ************************************ 00:20:13.496 START TEST dd_double_input 00:20:13.496 ************************************ 00:20:13.496 21:33:34 -- common/autotest_common.sh@1104 -- # double_input 00:20:13.496 21:33:34 -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:20:13.496 21:33:34 -- common/autotest_common.sh@640 -- # local es=0 00:20:13.496 21:33:34 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:20:13.496 21:33:34 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:13.496 21:33:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:13.496 21:33:34 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:13.496 21:33:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:13.496 21:33:34 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:13.496 21:33:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:13.496 21:33:34 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:13.496 21:33:34 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:20:13.496 21:33:34 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:20:13.496 [2024-07-11 21:33:34.252243] spdk_dd.c:1467:main: *ERROR*: You may specify either --if or --ib, but not both. 
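For reference, a correct invocation keeps the file and bdev arguments separate, which is exactly what the negative test above violates by passing --if together with --ib. A minimal sketch using only options from the usage text above — the paths, size, and count are hypothetical and not taken from this run:

    # Copy 2048 units of 4096 bytes (8 MiB) from one regular file to another.
    # --if/--of name regular files; --ib/--ob name bdevs; combining --if with
    # --ib (or --of with --ob) for the same direction produces the error above.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/tmp/input.bin --of=/tmp/output.bin --bs=4096 --count=2048

File-to-file copies like this need no --json bdev configuration; the JSON configs seen earlier in this log are only required when --ib or --ob refer to bdevs such as malloc0 or uring0.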
00:20:13.496 21:33:34 -- common/autotest_common.sh@643 -- # es=22 00:20:13.496 21:33:34 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:20:13.496 21:33:34 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:20:13.496 21:33:34 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:20:13.496 00:20:13.496 real 0m0.069s 00:20:13.496 user 0m0.045s 00:20:13.496 sys 0m0.024s 00:20:13.496 21:33:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:13.496 21:33:34 -- common/autotest_common.sh@10 -- # set +x 00:20:13.496 ************************************ 00:20:13.496 END TEST dd_double_input 00:20:13.496 ************************************ 00:20:13.496 21:33:34 -- dd/negative_dd.sh@109 -- # run_test dd_double_output double_output 00:20:13.496 21:33:34 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:20:13.496 21:33:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:13.496 21:33:34 -- common/autotest_common.sh@10 -- # set +x 00:20:13.496 ************************************ 00:20:13.496 START TEST dd_double_output 00:20:13.496 ************************************ 00:20:13.496 21:33:34 -- common/autotest_common.sh@1104 -- # double_output 00:20:13.496 21:33:34 -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:20:13.496 21:33:34 -- common/autotest_common.sh@640 -- # local es=0 00:20:13.496 21:33:34 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:20:13.496 21:33:34 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:13.496 21:33:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:13.496 21:33:34 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:13.496 21:33:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:13.496 21:33:34 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:13.496 21:33:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:13.496 21:33:34 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:13.496 21:33:34 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:20:13.496 21:33:34 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:20:13.497 [2024-07-11 21:33:34.377682] spdk_dd.c:1473:main: *ERROR*: You may specify either --of or --ob, but not both. 
00:20:13.497 21:33:34 -- common/autotest_common.sh@643 -- # es=22 00:20:13.497 21:33:34 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:20:13.497 21:33:34 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:20:13.497 21:33:34 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:20:13.497 00:20:13.497 real 0m0.082s 00:20:13.497 user 0m0.054s 00:20:13.497 sys 0m0.027s 00:20:13.497 21:33:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:13.497 ************************************ 00:20:13.497 END TEST dd_double_output 00:20:13.497 ************************************ 00:20:13.497 21:33:34 -- common/autotest_common.sh@10 -- # set +x 00:20:13.497 21:33:34 -- dd/negative_dd.sh@110 -- # run_test dd_no_input no_input 00:20:13.497 21:33:34 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:20:13.497 21:33:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:13.497 21:33:34 -- common/autotest_common.sh@10 -- # set +x 00:20:13.755 ************************************ 00:20:13.755 START TEST dd_no_input 00:20:13.755 ************************************ 00:20:13.755 21:33:34 -- common/autotest_common.sh@1104 -- # no_input 00:20:13.755 21:33:34 -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:20:13.755 21:33:34 -- common/autotest_common.sh@640 -- # local es=0 00:20:13.755 21:33:34 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:20:13.755 21:33:34 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:13.755 21:33:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:13.755 21:33:34 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:13.756 21:33:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:13.756 21:33:34 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:13.756 21:33:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:13.756 21:33:34 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:13.756 21:33:34 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:20:13.756 21:33:34 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:20:13.756 [2024-07-11 21:33:34.493794] spdk_dd.c:1479:main: *ERROR*: You must specify either --if or --ib 00:20:13.756 ************************************ 00:20:13.756 END TEST dd_no_input 00:20:13.756 ************************************ 00:20:13.756 21:33:34 -- common/autotest_common.sh@643 -- # es=22 00:20:13.756 21:33:34 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:20:13.756 21:33:34 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:20:13.756 21:33:34 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:20:13.756 00:20:13.756 real 0m0.061s 00:20:13.756 user 0m0.033s 00:20:13.756 sys 0m0.027s 00:20:13.756 21:33:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:13.756 21:33:34 -- common/autotest_common.sh@10 -- # set +x 00:20:13.756 21:33:34 -- dd/negative_dd.sh@111 -- # run_test dd_no_output no_output 00:20:13.756 21:33:34 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:20:13.756 21:33:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:13.756 21:33:34 -- common/autotest_common.sh@10 -- # set +x 00:20:13.756 ************************************ 
00:20:13.756 START TEST dd_no_output 00:20:13.756 ************************************ 00:20:13.756 21:33:34 -- common/autotest_common.sh@1104 -- # no_output 00:20:13.756 21:33:34 -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:20:13.756 21:33:34 -- common/autotest_common.sh@640 -- # local es=0 00:20:13.756 21:33:34 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:20:13.756 21:33:34 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:13.756 21:33:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:13.756 21:33:34 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:13.756 21:33:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:13.756 21:33:34 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:13.756 21:33:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:13.756 21:33:34 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:13.756 21:33:34 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:20:13.756 21:33:34 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:20:13.756 [2024-07-11 21:33:34.601498] spdk_dd.c:1485:main: *ERROR*: You must specify either --of or --ob 00:20:13.756 21:33:34 -- common/autotest_common.sh@643 -- # es=22 00:20:13.756 21:33:34 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:20:13.756 21:33:34 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:20:13.756 ************************************ 00:20:13.756 END TEST dd_no_output 00:20:13.756 ************************************ 00:20:13.756 21:33:34 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:20:13.756 00:20:13.756 real 0m0.060s 00:20:13.756 user 0m0.034s 00:20:13.756 sys 0m0.026s 00:20:13.756 21:33:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:13.756 21:33:34 -- common/autotest_common.sh@10 -- # set +x 00:20:13.756 21:33:34 -- dd/negative_dd.sh@112 -- # run_test dd_wrong_blocksize wrong_blocksize 00:20:13.756 21:33:34 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:20:13.756 21:33:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:13.756 21:33:34 -- common/autotest_common.sh@10 -- # set +x 00:20:13.756 ************************************ 00:20:13.756 START TEST dd_wrong_blocksize 00:20:13.756 ************************************ 00:20:13.756 21:33:34 -- common/autotest_common.sh@1104 -- # wrong_blocksize 00:20:13.756 21:33:34 -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:20:13.756 21:33:34 -- common/autotest_common.sh@640 -- # local es=0 00:20:13.756 21:33:34 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:20:13.756 21:33:34 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:13.756 21:33:34 -- common/autotest_common.sh@632 -- # case 
"$(type -t "$arg")" in 00:20:13.756 21:33:34 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:13.756 21:33:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:13.756 21:33:34 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:13.756 21:33:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:13.756 21:33:34 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:13.756 21:33:34 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:20:13.756 21:33:34 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:20:14.015 [2024-07-11 21:33:34.743388] spdk_dd.c:1491:main: *ERROR*: Invalid --bs value 00:20:14.015 21:33:34 -- common/autotest_common.sh@643 -- # es=22 00:20:14.015 21:33:34 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:20:14.015 21:33:34 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:20:14.015 21:33:34 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:20:14.015 00:20:14.015 real 0m0.102s 00:20:14.015 user 0m0.074s 00:20:14.015 sys 0m0.026s 00:20:14.015 21:33:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:14.015 21:33:34 -- common/autotest_common.sh@10 -- # set +x 00:20:14.015 ************************************ 00:20:14.015 END TEST dd_wrong_blocksize 00:20:14.015 ************************************ 00:20:14.015 21:33:34 -- dd/negative_dd.sh@113 -- # run_test dd_smaller_blocksize smaller_blocksize 00:20:14.015 21:33:34 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:20:14.015 21:33:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:14.015 21:33:34 -- common/autotest_common.sh@10 -- # set +x 00:20:14.015 ************************************ 00:20:14.015 START TEST dd_smaller_blocksize 00:20:14.015 ************************************ 00:20:14.015 21:33:34 -- common/autotest_common.sh@1104 -- # smaller_blocksize 00:20:14.015 21:33:34 -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:20:14.015 21:33:34 -- common/autotest_common.sh@640 -- # local es=0 00:20:14.015 21:33:34 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:20:14.015 21:33:34 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:14.015 21:33:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:14.015 21:33:34 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:14.015 21:33:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:14.015 21:33:34 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:14.015 21:33:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:14.015 21:33:34 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:14.015 21:33:34 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 
00:20:14.015 21:33:34 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:20:14.015 [2024-07-11 21:33:34.870462] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:20:14.015 [2024-07-11 21:33:34.870650] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71856 ] 00:20:14.273 [2024-07-11 21:33:35.009866] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:14.273 [2024-07-11 21:33:35.109363] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:14.273 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:20:14.273 [2024-07-11 21:33:35.195714] spdk_dd.c:1168:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:20:14.273 [2024-07-11 21:33:35.195746] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:20:14.532 [2024-07-11 21:33:35.306134] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:20:14.532 21:33:35 -- common/autotest_common.sh@643 -- # es=244 00:20:14.532 21:33:35 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:20:14.532 21:33:35 -- common/autotest_common.sh@652 -- # es=116 00:20:14.532 21:33:35 -- common/autotest_common.sh@653 -- # case "$es" in 00:20:14.532 21:33:35 -- common/autotest_common.sh@660 -- # es=1 00:20:14.532 21:33:35 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:20:14.532 00:20:14.532 real 0m0.580s 00:20:14.532 user 0m0.323s 00:20:14.532 sys 0m0.152s 00:20:14.532 21:33:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:14.532 ************************************ 00:20:14.532 END TEST dd_smaller_blocksize 00:20:14.532 ************************************ 00:20:14.532 21:33:35 -- common/autotest_common.sh@10 -- # set +x 00:20:14.532 21:33:35 -- dd/negative_dd.sh@114 -- # run_test dd_invalid_count invalid_count 00:20:14.532 21:33:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:20:14.532 21:33:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:14.532 21:33:35 -- common/autotest_common.sh@10 -- # set +x 00:20:14.532 ************************************ 00:20:14.532 START TEST dd_invalid_count 00:20:14.532 ************************************ 00:20:14.532 21:33:35 -- common/autotest_common.sh@1104 -- # invalid_count 00:20:14.532 21:33:35 -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:20:14.532 21:33:35 -- common/autotest_common.sh@640 -- # local es=0 00:20:14.532 21:33:35 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:20:14.532 21:33:35 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:14.532 21:33:35 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:14.532 21:33:35 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:14.532 21:33:35 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:14.532 21:33:35 
-- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:14.532 21:33:35 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:14.532 21:33:35 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:14.532 21:33:35 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:20:14.532 21:33:35 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:20:14.790 [2024-07-11 21:33:35.502207] spdk_dd.c:1497:main: *ERROR*: Invalid --count value 00:20:14.790 21:33:35 -- common/autotest_common.sh@643 -- # es=22 00:20:14.790 21:33:35 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:20:14.790 21:33:35 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:20:14.790 21:33:35 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:20:14.790 00:20:14.790 real 0m0.072s 00:20:14.790 user 0m0.043s 00:20:14.790 sys 0m0.028s 00:20:14.790 21:33:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:14.790 21:33:35 -- common/autotest_common.sh@10 -- # set +x 00:20:14.790 ************************************ 00:20:14.790 END TEST dd_invalid_count 00:20:14.790 ************************************ 00:20:14.790 21:33:35 -- dd/negative_dd.sh@115 -- # run_test dd_invalid_oflag invalid_oflag 00:20:14.790 21:33:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:20:14.790 21:33:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:14.790 21:33:35 -- common/autotest_common.sh@10 -- # set +x 00:20:14.790 ************************************ 00:20:14.790 START TEST dd_invalid_oflag 00:20:14.790 ************************************ 00:20:14.790 21:33:35 -- common/autotest_common.sh@1104 -- # invalid_oflag 00:20:14.790 21:33:35 -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:20:14.790 21:33:35 -- common/autotest_common.sh@640 -- # local es=0 00:20:14.790 21:33:35 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:20:14.790 21:33:35 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:14.790 21:33:35 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:14.790 21:33:35 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:14.790 21:33:35 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:14.790 21:33:35 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:14.790 21:33:35 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:14.790 21:33:35 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:14.790 21:33:35 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:20:14.790 21:33:35 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:20:14.790 [2024-07-11 21:33:35.621147] spdk_dd.c:1503:main: *ERROR*: --oflags may be used only with --of 00:20:14.790 21:33:35 -- common/autotest_common.sh@643 -- # es=22 00:20:14.790 21:33:35 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:20:14.790 21:33:35 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:20:14.790 
21:33:35 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:20:14.790 00:20:14.790 real 0m0.070s 00:20:14.790 user 0m0.047s 00:20:14.790 sys 0m0.023s 00:20:14.790 21:33:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:14.790 21:33:35 -- common/autotest_common.sh@10 -- # set +x 00:20:14.790 ************************************ 00:20:14.790 END TEST dd_invalid_oflag 00:20:14.790 ************************************ 00:20:14.790 21:33:35 -- dd/negative_dd.sh@116 -- # run_test dd_invalid_iflag invalid_iflag 00:20:14.790 21:33:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:20:14.790 21:33:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:14.790 21:33:35 -- common/autotest_common.sh@10 -- # set +x 00:20:14.790 ************************************ 00:20:14.790 START TEST dd_invalid_iflag 00:20:14.790 ************************************ 00:20:14.790 21:33:35 -- common/autotest_common.sh@1104 -- # invalid_iflag 00:20:14.790 21:33:35 -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:20:14.790 21:33:35 -- common/autotest_common.sh@640 -- # local es=0 00:20:14.790 21:33:35 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:20:14.790 21:33:35 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:14.790 21:33:35 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:14.790 21:33:35 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:14.790 21:33:35 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:14.790 21:33:35 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:14.790 21:33:35 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:14.790 21:33:35 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:14.790 21:33:35 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:20:14.790 21:33:35 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:20:14.790 [2024-07-11 21:33:35.734986] spdk_dd.c:1509:main: *ERROR*: --iflags may be used only with --if 00:20:15.048 21:33:35 -- common/autotest_common.sh@643 -- # es=22 00:20:15.048 21:33:35 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:20:15.048 21:33:35 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:20:15.048 21:33:35 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:20:15.048 00:20:15.048 real 0m0.060s 00:20:15.048 user 0m0.039s 00:20:15.048 sys 0m0.020s 00:20:15.048 21:33:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:15.048 21:33:35 -- common/autotest_common.sh@10 -- # set +x 00:20:15.048 ************************************ 00:20:15.048 END TEST dd_invalid_iflag 00:20:15.048 ************************************ 00:20:15.048 21:33:35 -- dd/negative_dd.sh@117 -- # run_test dd_unknown_flag unknown_flag 00:20:15.048 21:33:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:20:15.048 21:33:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:15.048 21:33:35 -- common/autotest_common.sh@10 -- # set +x 00:20:15.048 ************************************ 00:20:15.048 START TEST dd_unknown_flag 00:20:15.048 ************************************ 00:20:15.048 21:33:35 -- common/autotest_common.sh@1104 -- # 
unknown_flag 00:20:15.048 21:33:35 -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:20:15.048 21:33:35 -- common/autotest_common.sh@640 -- # local es=0 00:20:15.048 21:33:35 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:20:15.048 21:33:35 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:15.048 21:33:35 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:15.048 21:33:35 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:15.048 21:33:35 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:15.048 21:33:35 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:15.048 21:33:35 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:15.048 21:33:35 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:15.048 21:33:35 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:20:15.048 21:33:35 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:20:15.048 [2024-07-11 21:33:35.852786] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:20:15.048 [2024-07-11 21:33:35.852899] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71949 ] 00:20:15.048 [2024-07-11 21:33:35.995214] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:15.306 [2024-07-11 21:33:36.098881] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:15.306 [2024-07-11 21:33:36.191443] spdk_dd.c: 985:parse_flags: *ERROR*: Unknown file flag: -1 00:20:15.306 [2024-07-11 21:33:36.191537] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1: Not a directory 00:20:15.306 [2024-07-11 21:33:36.191553] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1: Not a directory 00:20:15.306 [2024-07-11 21:33:36.191568] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:20:15.565 [2024-07-11 21:33:36.302295] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:20:15.565 21:33:36 -- common/autotest_common.sh@643 -- # es=236 00:20:15.565 21:33:36 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:20:15.565 21:33:36 -- common/autotest_common.sh@652 -- # es=108 00:20:15.565 21:33:36 -- common/autotest_common.sh@653 -- # case "$es" in 00:20:15.565 21:33:36 -- common/autotest_common.sh@660 -- # es=1 00:20:15.565 21:33:36 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:20:15.565 00:20:15.565 real 0m0.590s 00:20:15.565 user 0m0.325s 00:20:15.565 sys 0m0.158s 00:20:15.565 21:33:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:15.565 21:33:36 -- common/autotest_common.sh@10 -- # set +x 00:20:15.565 ************************************ 00:20:15.565 END 
TEST dd_unknown_flag 00:20:15.565 ************************************ 00:20:15.565 21:33:36 -- dd/negative_dd.sh@118 -- # run_test dd_invalid_json invalid_json 00:20:15.565 21:33:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:20:15.565 21:33:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:15.565 21:33:36 -- common/autotest_common.sh@10 -- # set +x 00:20:15.565 ************************************ 00:20:15.565 START TEST dd_invalid_json 00:20:15.565 ************************************ 00:20:15.565 21:33:36 -- common/autotest_common.sh@1104 -- # invalid_json 00:20:15.565 21:33:36 -- dd/negative_dd.sh@95 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:20:15.565 21:33:36 -- dd/negative_dd.sh@95 -- # : 00:20:15.565 21:33:36 -- common/autotest_common.sh@640 -- # local es=0 00:20:15.565 21:33:36 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:20:15.565 21:33:36 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:15.565 21:33:36 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:15.565 21:33:36 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:15.565 21:33:36 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:15.565 21:33:36 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:15.565 21:33:36 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:15.565 21:33:36 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:15.565 21:33:36 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:20:15.565 21:33:36 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:20:15.565 [2024-07-11 21:33:36.496266] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:20:15.565 [2024-07-11 21:33:36.496376] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71971 ] 00:20:15.824 [2024-07-11 21:33:36.638215] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:15.824 [2024-07-11 21:33:36.737519] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:15.824 [2024-07-11 21:33:36.737677] json_config.c: 529:app_json_config_read: *ERROR*: Parsing JSON configuration failed (-2) 00:20:15.824 [2024-07-11 21:33:36.737702] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:20:15.824 [2024-07-11 21:33:36.737750] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:20:16.083 21:33:36 -- common/autotest_common.sh@643 -- # es=234 00:20:16.083 21:33:36 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:20:16.083 21:33:36 -- common/autotest_common.sh@652 -- # es=106 00:20:16.083 21:33:36 -- common/autotest_common.sh@653 -- # case "$es" in 00:20:16.083 21:33:36 -- common/autotest_common.sh@660 -- # es=1 00:20:16.083 21:33:36 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:20:16.083 00:20:16.083 real 0m0.390s 00:20:16.083 user 0m0.213s 00:20:16.083 sys 0m0.074s 00:20:16.083 21:33:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:16.083 21:33:36 -- common/autotest_common.sh@10 -- # set +x 00:20:16.083 ************************************ 00:20:16.083 END TEST dd_invalid_json 00:20:16.083 ************************************ 00:20:16.083 00:20:16.083 real 0m2.905s 00:20:16.083 user 0m1.499s 00:20:16.083 sys 0m1.043s 00:20:16.083 21:33:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:16.083 21:33:36 -- common/autotest_common.sh@10 -- # set +x 00:20:16.083 ************************************ 00:20:16.083 END TEST spdk_dd_negative 00:20:16.083 ************************************ 00:20:16.083 00:20:16.083 real 1m19.207s 00:20:16.083 user 0m49.151s 00:20:16.083 sys 0m20.692s 00:20:16.083 21:33:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:16.083 21:33:36 -- common/autotest_common.sh@10 -- # set +x 00:20:16.083 ************************************ 00:20:16.083 END TEST spdk_dd 00:20:16.083 ************************************ 00:20:16.083 21:33:36 -- spdk/autotest.sh@217 -- # '[' 0 -eq 1 ']' 00:20:16.083 21:33:36 -- spdk/autotest.sh@264 -- # '[' 0 -eq 1 ']' 00:20:16.083 21:33:36 -- spdk/autotest.sh@268 -- # timing_exit lib 00:20:16.083 21:33:36 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:16.083 21:33:36 -- common/autotest_common.sh@10 -- # set +x 00:20:16.083 21:33:36 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:20:16.083 21:33:36 -- spdk/autotest.sh@278 -- # '[' 0 -eq 1 ']' 00:20:16.083 21:33:36 -- spdk/autotest.sh@287 -- # '[' 1 -eq 1 ']' 00:20:16.083 21:33:36 -- spdk/autotest.sh@288 -- # export NET_TYPE 00:20:16.083 21:33:36 -- spdk/autotest.sh@291 -- # '[' tcp = rdma ']' 00:20:16.083 21:33:36 -- spdk/autotest.sh@294 -- # '[' tcp = tcp ']' 00:20:16.083 21:33:36 -- spdk/autotest.sh@295 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:20:16.083 21:33:36 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:20:16.083 21:33:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:16.083 21:33:36 -- common/autotest_common.sh@10 -- # set +x 00:20:16.083 ************************************ 00:20:16.083 START 
TEST nvmf_tcp 00:20:16.083 ************************************ 00:20:16.083 21:33:37 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:20:16.380 * Looking for test storage... 00:20:16.380 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:20:16.380 21:33:37 -- nvmf/nvmf.sh@10 -- # uname -s 00:20:16.380 21:33:37 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:20:16.380 21:33:37 -- nvmf/nvmf.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:16.380 21:33:37 -- nvmf/common.sh@7 -- # uname -s 00:20:16.380 21:33:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:16.380 21:33:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:16.380 21:33:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:16.380 21:33:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:16.380 21:33:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:16.380 21:33:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:16.380 21:33:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:16.380 21:33:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:16.380 21:33:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:16.380 21:33:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:16.380 21:33:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:65f0dc09-2f81-4c7b-a413-2a2a000e2750 00:20:16.380 21:33:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=65f0dc09-2f81-4c7b-a413-2a2a000e2750 00:20:16.380 21:33:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:16.380 21:33:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:16.380 21:33:37 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:16.380 21:33:37 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:16.380 21:33:37 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:16.380 21:33:37 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:16.380 21:33:37 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:16.380 21:33:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:16.380 21:33:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:16.380 21:33:37 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:16.380 21:33:37 -- paths/export.sh@5 -- # export PATH 00:20:16.380 21:33:37 
-- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:16.380 21:33:37 -- nvmf/common.sh@46 -- # : 0 00:20:16.380 21:33:37 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:16.380 21:33:37 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:16.380 21:33:37 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:16.380 21:33:37 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:16.380 21:33:37 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:16.380 21:33:37 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:16.380 21:33:37 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:16.380 21:33:37 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:16.380 21:33:37 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:20:16.380 21:33:37 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:20:16.380 21:33:37 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:20:16.380 21:33:37 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:16.380 21:33:37 -- common/autotest_common.sh@10 -- # set +x 00:20:16.380 21:33:37 -- nvmf/nvmf.sh@22 -- # [[ 1 -eq 0 ]] 00:20:16.380 21:33:37 -- nvmf/nvmf.sh@46 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:20:16.380 21:33:37 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:20:16.380 21:33:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:16.380 21:33:37 -- common/autotest_common.sh@10 -- # set +x 00:20:16.380 ************************************ 00:20:16.380 START TEST nvmf_host_management 00:20:16.380 ************************************ 00:20:16.380 21:33:37 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:20:16.380 * Looking for test storage... 
00:20:16.380 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:16.380 21:33:37 -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:16.380 21:33:37 -- nvmf/common.sh@7 -- # uname -s 00:20:16.380 21:33:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:16.380 21:33:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:16.380 21:33:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:16.380 21:33:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:16.380 21:33:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:16.380 21:33:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:16.380 21:33:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:16.380 21:33:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:16.380 21:33:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:16.380 21:33:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:16.380 21:33:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:65f0dc09-2f81-4c7b-a413-2a2a000e2750 00:20:16.380 21:33:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=65f0dc09-2f81-4c7b-a413-2a2a000e2750 00:20:16.380 21:33:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:16.380 21:33:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:16.380 21:33:37 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:16.380 21:33:37 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:16.380 21:33:37 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:16.380 21:33:37 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:16.380 21:33:37 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:16.380 21:33:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:16.380 21:33:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:16.380 21:33:37 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:16.380 21:33:37 -- 
paths/export.sh@5 -- # export PATH 00:20:16.380 21:33:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:16.380 21:33:37 -- nvmf/common.sh@46 -- # : 0 00:20:16.380 21:33:37 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:16.380 21:33:37 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:16.380 21:33:37 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:16.380 21:33:37 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:16.380 21:33:37 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:16.380 21:33:37 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:16.380 21:33:37 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:16.380 21:33:37 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:16.380 21:33:37 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:16.380 21:33:37 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:16.380 21:33:37 -- target/host_management.sh@104 -- # nvmftestinit 00:20:16.380 21:33:37 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:16.380 21:33:37 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:16.380 21:33:37 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:16.380 21:33:37 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:16.380 21:33:37 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:16.380 21:33:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:16.380 21:33:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:16.380 21:33:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:16.380 21:33:37 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:20:16.380 21:33:37 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:20:16.380 21:33:37 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:20:16.380 21:33:37 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:20:16.380 21:33:37 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:20:16.380 21:33:37 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:20:16.380 21:33:37 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:16.380 21:33:37 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:16.380 21:33:37 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:16.381 21:33:37 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:20:16.381 21:33:37 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:16.381 21:33:37 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:16.381 21:33:37 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:16.381 21:33:37 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:16.381 21:33:37 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:16.381 21:33:37 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:16.381 21:33:37 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:16.381 21:33:37 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:16.381 21:33:37 -- nvmf/common.sh@153 -- # ip link set 
nvmf_init_br nomaster 00:20:16.381 Cannot find device "nvmf_init_br" 00:20:16.381 21:33:37 -- nvmf/common.sh@153 -- # true 00:20:16.381 21:33:37 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:20:16.381 Cannot find device "nvmf_tgt_br" 00:20:16.381 21:33:37 -- nvmf/common.sh@154 -- # true 00:20:16.381 21:33:37 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:20:16.381 Cannot find device "nvmf_tgt_br2" 00:20:16.381 21:33:37 -- nvmf/common.sh@155 -- # true 00:20:16.381 21:33:37 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:20:16.381 Cannot find device "nvmf_init_br" 00:20:16.381 21:33:37 -- nvmf/common.sh@156 -- # true 00:20:16.381 21:33:37 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:20:16.381 Cannot find device "nvmf_tgt_br" 00:20:16.381 21:33:37 -- nvmf/common.sh@157 -- # true 00:20:16.381 21:33:37 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:20:16.381 Cannot find device "nvmf_tgt_br2" 00:20:16.381 21:33:37 -- nvmf/common.sh@158 -- # true 00:20:16.381 21:33:37 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:20:16.381 Cannot find device "nvmf_br" 00:20:16.381 21:33:37 -- nvmf/common.sh@159 -- # true 00:20:16.381 21:33:37 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:20:16.381 Cannot find device "nvmf_init_if" 00:20:16.381 21:33:37 -- nvmf/common.sh@160 -- # true 00:20:16.381 21:33:37 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:16.381 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:16.381 21:33:37 -- nvmf/common.sh@161 -- # true 00:20:16.381 21:33:37 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:16.381 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:16.381 21:33:37 -- nvmf/common.sh@162 -- # true 00:20:16.381 21:33:37 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:20:16.381 21:33:37 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:16.639 21:33:37 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:16.639 21:33:37 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:16.639 21:33:37 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:16.639 21:33:37 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:16.639 21:33:37 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:16.639 21:33:37 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:16.639 21:33:37 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:16.639 21:33:37 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:20:16.639 21:33:37 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:20:16.639 21:33:37 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:20:16.639 21:33:37 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:20:16.639 21:33:37 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:16.639 21:33:37 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:16.639 21:33:37 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:16.639 21:33:37 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:20:16.639 21:33:37 -- nvmf/common.sh@192 
-- # ip link set nvmf_br up 00:20:16.639 21:33:37 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:20:16.639 21:33:37 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:16.639 21:33:37 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:16.639 21:33:37 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:16.898 21:33:37 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:16.898 21:33:37 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:20:16.898 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:16.898 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.144 ms 00:20:16.898 00:20:16.898 --- 10.0.0.2 ping statistics --- 00:20:16.898 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:16.898 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:20:16.898 21:33:37 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:20:16.898 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:16.898 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.069 ms 00:20:16.898 00:20:16.898 --- 10.0.0.3 ping statistics --- 00:20:16.898 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:16.898 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:20:16.898 21:33:37 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:16.898 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:16.898 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.046 ms 00:20:16.898 00:20:16.898 --- 10.0.0.1 ping statistics --- 00:20:16.898 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:16.898 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:20:16.898 21:33:37 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:16.898 21:33:37 -- nvmf/common.sh@421 -- # return 0 00:20:16.898 21:33:37 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:16.898 21:33:37 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:16.898 21:33:37 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:16.898 21:33:37 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:16.898 21:33:37 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:16.898 21:33:37 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:16.898 21:33:37 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:16.898 21:33:37 -- target/host_management.sh@106 -- # run_test nvmf_host_management nvmf_host_management 00:20:16.898 21:33:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:20:16.898 21:33:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:16.898 21:33:37 -- common/autotest_common.sh@10 -- # set +x 00:20:16.898 ************************************ 00:20:16.898 START TEST nvmf_host_management 00:20:16.898 ************************************ 00:20:16.898 21:33:37 -- common/autotest_common.sh@1104 -- # nvmf_host_management 00:20:16.898 21:33:37 -- target/host_management.sh@69 -- # starttarget 00:20:16.898 21:33:37 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:20:16.898 21:33:37 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:16.898 21:33:37 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:16.898 21:33:37 -- common/autotest_common.sh@10 -- # set +x 00:20:16.898 21:33:37 -- nvmf/common.sh@469 -- # nvmfpid=72228 00:20:16.898 21:33:37 -- nvmf/common.sh@470 -- # waitforlisten 72228 00:20:16.898 21:33:37 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk 
/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:16.898 21:33:37 -- common/autotest_common.sh@819 -- # '[' -z 72228 ']' 00:20:16.898 21:33:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:16.898 21:33:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:16.898 21:33:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:16.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:16.898 21:33:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:16.898 21:33:37 -- common/autotest_common.sh@10 -- # set +x 00:20:16.898 [2024-07-11 21:33:37.729761] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:20:16.898 [2024-07-11 21:33:37.729863] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:17.156 [2024-07-11 21:33:37.875273] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:17.156 [2024-07-11 21:33:37.973294] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:17.156 [2024-07-11 21:33:37.973477] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:17.157 [2024-07-11 21:33:37.973512] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:17.157 [2024-07-11 21:33:37.973525] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:17.157 [2024-07-11 21:33:37.973686] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:17.157 [2024-07-11 21:33:37.974164] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:17.157 [2024-07-11 21:33:37.974412] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:20:17.157 [2024-07-11 21:33:37.974418] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:18.091 21:33:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:18.091 21:33:38 -- common/autotest_common.sh@852 -- # return 0 00:20:18.091 21:33:38 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:18.091 21:33:38 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:18.091 21:33:38 -- common/autotest_common.sh@10 -- # set +x 00:20:18.091 21:33:38 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:18.091 21:33:38 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:18.091 21:33:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:18.091 21:33:38 -- common/autotest_common.sh@10 -- # set +x 00:20:18.091 [2024-07-11 21:33:38.725110] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:18.091 21:33:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:18.091 21:33:38 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:20:18.091 21:33:38 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:18.091 21:33:38 -- common/autotest_common.sh@10 -- # set +x 00:20:18.091 21:33:38 -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:20:18.091 21:33:38 -- target/host_management.sh@23 -- # cat 00:20:18.091 21:33:38 -- 
target/host_management.sh@30 -- # rpc_cmd 00:20:18.091 21:33:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:18.091 21:33:38 -- common/autotest_common.sh@10 -- # set +x 00:20:18.091 Malloc0 00:20:18.091 [2024-07-11 21:33:38.796802] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:18.091 21:33:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:18.091 21:33:38 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:20:18.091 21:33:38 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:18.091 21:33:38 -- common/autotest_common.sh@10 -- # set +x 00:20:18.091 21:33:38 -- target/host_management.sh@73 -- # perfpid=72292 00:20:18.091 21:33:38 -- target/host_management.sh@74 -- # waitforlisten 72292 /var/tmp/bdevperf.sock 00:20:18.091 21:33:38 -- common/autotest_common.sh@819 -- # '[' -z 72292 ']' 00:20:18.091 21:33:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:18.091 21:33:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:18.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:18.091 21:33:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:18.091 21:33:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:18.091 21:33:38 -- common/autotest_common.sh@10 -- # set +x 00:20:18.091 21:33:38 -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:20:18.091 21:33:38 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:20:18.091 21:33:38 -- nvmf/common.sh@520 -- # config=() 00:20:18.091 21:33:38 -- nvmf/common.sh@520 -- # local subsystem config 00:20:18.091 21:33:38 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:20:18.091 21:33:38 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:20:18.091 { 00:20:18.091 "params": { 00:20:18.091 "name": "Nvme$subsystem", 00:20:18.091 "trtype": "$TEST_TRANSPORT", 00:20:18.091 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:18.091 "adrfam": "ipv4", 00:20:18.091 "trsvcid": "$NVMF_PORT", 00:20:18.091 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:18.091 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:18.091 "hdgst": ${hdgst:-false}, 00:20:18.091 "ddgst": ${ddgst:-false} 00:20:18.091 }, 00:20:18.091 "method": "bdev_nvme_attach_controller" 00:20:18.091 } 00:20:18.091 EOF 00:20:18.091 )") 00:20:18.091 21:33:38 -- nvmf/common.sh@542 -- # cat 00:20:18.091 21:33:38 -- nvmf/common.sh@544 -- # jq . 00:20:18.091 21:33:38 -- nvmf/common.sh@545 -- # IFS=, 00:20:18.091 21:33:38 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:20:18.091 "params": { 00:20:18.091 "name": "Nvme0", 00:20:18.091 "trtype": "tcp", 00:20:18.091 "traddr": "10.0.0.2", 00:20:18.091 "adrfam": "ipv4", 00:20:18.091 "trsvcid": "4420", 00:20:18.091 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:18.091 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:18.091 "hdgst": false, 00:20:18.091 "ddgst": false 00:20:18.091 }, 00:20:18.091 "method": "bdev_nvme_attach_controller" 00:20:18.091 }' 00:20:18.091 [2024-07-11 21:33:38.899042] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:20:18.091 [2024-07-11 21:33:38.899155] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72292 ] 00:20:18.091 [2024-07-11 21:33:39.040687] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:18.348 [2024-07-11 21:33:39.138331] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:18.606 Running I/O for 10 seconds... 00:20:19.173 21:33:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:19.173 21:33:39 -- common/autotest_common.sh@852 -- # return 0 00:20:19.173 21:33:39 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:19.173 21:33:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:19.173 21:33:39 -- common/autotest_common.sh@10 -- # set +x 00:20:19.173 21:33:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:19.173 21:33:39 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:19.173 21:33:39 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:20:19.173 21:33:39 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:20:19.173 21:33:39 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:20:19.173 21:33:39 -- target/host_management.sh@52 -- # local ret=1 00:20:19.173 21:33:39 -- target/host_management.sh@53 -- # local i 00:20:19.173 21:33:39 -- target/host_management.sh@54 -- # (( i = 10 )) 00:20:19.173 21:33:39 -- target/host_management.sh@54 -- # (( i != 0 )) 00:20:19.173 21:33:39 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:20:19.173 21:33:39 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:20:19.173 21:33:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:19.173 21:33:39 -- common/autotest_common.sh@10 -- # set +x 00:20:19.173 21:33:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:19.173 21:33:39 -- target/host_management.sh@55 -- # read_io_count=1710 00:20:19.173 21:33:39 -- target/host_management.sh@58 -- # '[' 1710 -ge 100 ']' 00:20:19.173 21:33:39 -- target/host_management.sh@59 -- # ret=0 00:20:19.173 21:33:39 -- target/host_management.sh@60 -- # break 00:20:19.173 21:33:39 -- target/host_management.sh@64 -- # return 0 00:20:19.173 21:33:39 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:20:19.173 21:33:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:19.173 21:33:39 -- common/autotest_common.sh@10 -- # set +x 00:20:19.173 [2024-07-11 21:33:39.967093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:103936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.173 [2024-07-11 21:33:39.967147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.173 [2024-07-11 21:33:39.967173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:104064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.173 [2024-07-11 21:33:39.967184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.173 [2024-07-11 21:33:39.967197] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:104192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.173 [2024-07-11 21:33:39.967207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.173 [2024-07-11 21:33:39.967219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:104448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.173 [2024-07-11 21:33:39.967229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.173 [2024-07-11 21:33:39.967241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:104576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.173 [2024-07-11 21:33:39.967251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.173 [2024-07-11 21:33:39.967263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:104704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.173 [2024-07-11 21:33:39.967272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.173 [2024-07-11 21:33:39.967284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:104960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.173 [2024-07-11 21:33:39.967293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.173 [2024-07-11 21:33:39.967305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:105088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.173 [2024-07-11 21:33:39.967314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.173 [2024-07-11 21:33:39.967325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:105216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.173 [2024-07-11 21:33:39.967335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.173 [2024-07-11 21:33:39.967346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:105344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.173 [2024-07-11 21:33:39.967355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.173 [2024-07-11 21:33:39.967367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:105472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.173 [2024-07-11 21:33:39.967376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.173 [2024-07-11 21:33:39.967387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:105600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.173 [2024-07-11 21:33:39.967397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.173 [2024-07-11 21:33:39.967408] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:105856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.173 [2024-07-11 21:33:39.967428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.173 [2024-07-11 21:33:39.967440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:100224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.173 [2024-07-11 21:33:39.967450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.173 [2024-07-11 21:33:39.967462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:105984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.173 [2024-07-11 21:33:39.967472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.173 [2024-07-11 21:33:39.967495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:106112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.173 [2024-07-11 21:33:39.967507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.173 [2024-07-11 21:33:39.967521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:106240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.173 [2024-07-11 21:33:39.967531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.173 [2024-07-11 21:33:39.967565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:106368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.173 [2024-07-11 21:33:39.967576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.173 [2024-07-11 21:33:39.967588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:106496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.173 [2024-07-11 21:33:39.967598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.173 [2024-07-11 21:33:39.967609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:106624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.173 [2024-07-11 21:33:39.967619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.173 [2024-07-11 21:33:39.967631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:106752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.173 [2024-07-11 21:33:39.967641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.173 [2024-07-11 21:33:39.967652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:106880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.173 [2024-07-11 21:33:39.967662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.173 [2024-07-11 21:33:39.967674] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:107008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.173 [2024-07-11 21:33:39.967683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.173 [2024-07-11 21:33:39.967695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:107136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.173 [2024-07-11 21:33:39.967704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.173 [2024-07-11 21:33:39.967716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:107264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.173 [2024-07-11 21:33:39.967725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.173 [2024-07-11 21:33:39.967738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:107392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.173 [2024-07-11 21:33:39.967748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.173 [2024-07-11 21:33:39.967759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:107520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.173 [2024-07-11 21:33:39.967769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.173 [2024-07-11 21:33:39.967781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:100480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.173 [2024-07-11 21:33:39.967791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.173 [2024-07-11 21:33:39.967803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:107648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.173 [2024-07-11 21:33:39.967812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.173 [2024-07-11 21:33:39.967823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:100608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.173 [2024-07-11 21:33:39.967833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.173 [2024-07-11 21:33:39.967844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:107776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.173 [2024-07-11 21:33:39.967853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.173 [2024-07-11 21:33:39.967865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:107904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.174 [2024-07-11 21:33:39.967874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.174 [2024-07-11 21:33:39.967887] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:108032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.174 [2024-07-11 21:33:39.967897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.174 [2024-07-11 21:33:39.967915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:108160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.174 [2024-07-11 21:33:39.967926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.174 [2024-07-11 21:33:39.967937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:100736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.174 [2024-07-11 21:33:39.967947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.174 [2024-07-11 21:33:39.967959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:108288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.174 [2024-07-11 21:33:39.967968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.174 [2024-07-11 21:33:39.967980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:108416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.174 [2024-07-11 21:33:39.967989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.174 [2024-07-11 21:33:39.968008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:108544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.174 [2024-07-11 21:33:39.968018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.174 [2024-07-11 21:33:39.968030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:108672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.174 [2024-07-11 21:33:39.968039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.174 [2024-07-11 21:33:39.968051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:100864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.174 [2024-07-11 21:33:39.968060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.174 [2024-07-11 21:33:39.968072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:108800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.174 [2024-07-11 21:33:39.968082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.174 [2024-07-11 21:33:39.968093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:108928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.174 [2024-07-11 21:33:39.968102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.174 [2024-07-11 21:33:39.968113] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:109056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.174 [2024-07-11 21:33:39.968122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.174 [2024-07-11 21:33:39.968134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:109184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.174 [2024-07-11 21:33:39.968143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.174 [2024-07-11 21:33:39.968154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:109312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.174 [2024-07-11 21:33:39.968164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.174 [2024-07-11 21:33:39.968175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:109440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.174 [2024-07-11 21:33:39.968185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.174 [2024-07-11 21:33:39.968197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:100992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.174 [2024-07-11 21:33:39.968206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.174 [2024-07-11 21:33:39.968217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:109568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.174 [2024-07-11 21:33:39.968227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.174 [2024-07-11 21:33:39.968241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:109696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.174 [2024-07-11 21:33:39.968251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.174 [2024-07-11 21:33:39.968270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:109824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.174 [2024-07-11 21:33:39.968280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.174 [2024-07-11 21:33:39.968293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:109952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.174 [2024-07-11 21:33:39.968302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.174 [2024-07-11 21:33:39.968314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:101120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.174 [2024-07-11 21:33:39.968324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.174 [2024-07-11 21:33:39.968336] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:101248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.174 [2024-07-11 21:33:39.968346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.174 [2024-07-11 21:33:39.968357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:101376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.174 [2024-07-11 21:33:39.968367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.174 [2024-07-11 21:33:39.968378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:101504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.174 [2024-07-11 21:33:39.968388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.174 [2024-07-11 21:33:39.968400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:101632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.174 [2024-07-11 21:33:39.968409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.174 [2024-07-11 21:33:39.968420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:101760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.174 [2024-07-11 21:33:39.968430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.174 [2024-07-11 21:33:39.968442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:101888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.174 [2024-07-11 21:33:39.968451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.174 [2024-07-11 21:33:39.968463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:102400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.174 [2024-07-11 21:33:39.968473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.174 [2024-07-11 21:33:39.968494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:102528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.174 [2024-07-11 21:33:39.968505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.174 [2024-07-11 21:33:39.968516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:102784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.174 [2024-07-11 21:33:39.968526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.174 [2024-07-11 21:33:39.968537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:102912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.174 [2024-07-11 21:33:39.968548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.174 [2024-07-11 21:33:39.968560] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:103296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.174 [2024-07-11 21:33:39.968570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.174 [2024-07-11 21:33:39.968588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:103424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.174 [2024-07-11 21:33:39.968597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.174 [2024-07-11 21:33:39.968609] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b56d0 is same with the state(5) to be set 00:20:19.174 [2024-07-11 21:33:39.968686] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x7b56d0 was disconnected and freed. reset controller. 00:20:19.174 [2024-07-11 21:33:39.968804] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:19.174 [2024-07-11 21:33:39.968821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.174 [2024-07-11 21:33:39.968833] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:19.174 [2024-07-11 21:33:39.968842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.174 [2024-07-11 21:33:39.968852] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:19.174 [2024-07-11 21:33:39.968861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.174 [2024-07-11 21:33:39.968870] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:19.174 [2024-07-11 21:33:39.968879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.174 [2024-07-11 21:33:39.968888] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x82b370 is same with the state(5) to be set 00:20:19.174 [2024-07-11 21:33:39.969992] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:19.174 task offset: 103936 on job bdev=Nvme0n1 fails 00:20:19.174 00:20:19.174 Latency(us) 00:20:19.174 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:19.174 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:19.174 Job: Nvme0n1 ended in about 0.66 seconds with error 00:20:19.174 Verification LBA range: start 0x0 length 0x400 00:20:19.174 Nvme0n1 : 0.66 2768.96 173.06 97.37 0.00 21932.12 2204.39 30265.72 00:20:19.174 =================================================================================================================== 00:20:19.174 Total : 2768.96 173.06 97.37 0.00 21932.12 2204.39 30265.72 00:20:19.174 21:33:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:19.174 [2024-07-11 21:33:39.972218] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:20:19.175 
[2024-07-11 21:33:39.972249] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x82b370 (9): Bad file descriptor 00:20:19.175 21:33:39 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:20:19.175 21:33:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:19.175 21:33:39 -- common/autotest_common.sh@10 -- # set +x 00:20:19.175 [2024-07-11 21:33:39.977723] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:20:19.175 [2024-07-11 21:33:39.977840] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:20:19.175 [2024-07-11 21:33:39.977865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.175 [2024-07-11 21:33:39.977884] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:20:19.175 [2024-07-11 21:33:39.977895] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:20:19.175 [2024-07-11 21:33:39.977905] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:19.175 [2024-07-11 21:33:39.977914] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x82b370 00:20:19.175 [2024-07-11 21:33:39.977948] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x82b370 (9): Bad file descriptor 00:20:19.175 [2024-07-11 21:33:39.977968] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:19.175 [2024-07-11 21:33:39.977978] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:19.175 [2024-07-11 21:33:39.977989] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:19.175 [2024-07-11 21:33:39.978005] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
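The rejected CONNECT above ("does not allow host", sct 1 / sc 132) is exactly the state host_management.sh is testing for: the subsystem has not whitelisted nqn.2016-06.io.spdk:host0 yet, and the very next rpc_cmd call adds it so the initiator's reconnect can succeed. A minimal sketch of that access-control sequence, assuming the stock rpc.py helpers and reusing the NQNs from the output above (the subsystem-creation flags used earlier in the test are not shown in this excerpt):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# subsystem created without -a / --allow-any-host, so unknown host NQNs are rejected
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDK0
# a CONNECT from nqn.2016-06.io.spdk:host0 now fails with "does not allow host"
# whitelist the host; the initiator's next reconnect attempt is accepted
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0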
00:20:19.175 21:33:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:19.175 21:33:39 -- target/host_management.sh@87 -- # sleep 1 00:20:20.135 21:33:40 -- target/host_management.sh@91 -- # kill -9 72292 00:20:20.135 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (72292) - No such process 00:20:20.135 21:33:40 -- target/host_management.sh@91 -- # true 00:20:20.135 21:33:40 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:20:20.135 21:33:40 -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:20:20.135 21:33:40 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:20:20.135 21:33:40 -- nvmf/common.sh@520 -- # config=() 00:20:20.135 21:33:40 -- nvmf/common.sh@520 -- # local subsystem config 00:20:20.135 21:33:40 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:20:20.135 21:33:40 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:20:20.135 { 00:20:20.135 "params": { 00:20:20.135 "name": "Nvme$subsystem", 00:20:20.135 "trtype": "$TEST_TRANSPORT", 00:20:20.135 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:20.135 "adrfam": "ipv4", 00:20:20.135 "trsvcid": "$NVMF_PORT", 00:20:20.135 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:20.135 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:20.135 "hdgst": ${hdgst:-false}, 00:20:20.135 "ddgst": ${ddgst:-false} 00:20:20.135 }, 00:20:20.135 "method": "bdev_nvme_attach_controller" 00:20:20.135 } 00:20:20.135 EOF 00:20:20.135 )") 00:20:20.135 21:33:40 -- nvmf/common.sh@542 -- # cat 00:20:20.135 21:33:41 -- nvmf/common.sh@544 -- # jq . 00:20:20.135 21:33:41 -- nvmf/common.sh@545 -- # IFS=, 00:20:20.135 21:33:41 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:20:20.135 "params": { 00:20:20.135 "name": "Nvme0", 00:20:20.135 "trtype": "tcp", 00:20:20.135 "traddr": "10.0.0.2", 00:20:20.135 "adrfam": "ipv4", 00:20:20.135 "trsvcid": "4420", 00:20:20.135 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:20.135 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:20.135 "hdgst": false, 00:20:20.135 "ddgst": false 00:20:20.135 }, 00:20:20.135 "method": "bdev_nvme_attach_controller" 00:20:20.135 }' 00:20:20.418 [2024-07-11 21:33:41.065180] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:20:20.418 [2024-07-11 21:33:41.065323] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72331 ] 00:20:20.418 [2024-07-11 21:33:41.206823] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:20.418 [2024-07-11 21:33:41.303789] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:20.676 Running I/O for 1 seconds... 
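The bdevperf relaunch above builds its bdev layer from JSON generated on the fly: gen_nvmf_target_json expands the heredoc into the bdev_nvme_attach_controller entry shown by printf and hands it to bdevperf over /dev/fd/62. A rough stand-alone equivalent, assuming the standard SPDK JSON-config wrapper around that fragment (the file path here is illustrative, not what the script uses):

cat > /tmp/bdevperf_nvme.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# same workload parameters as the run above: 64 outstanding 64 KiB I/Os, verify pattern, 1 second
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock \
    --json /tmp/bdevperf_nvme.json -q 64 -o 65536 -w verify -t 1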
00:20:21.611 00:20:21.611 Latency(us) 00:20:21.611 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:21.611 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:21.611 Verification LBA range: start 0x0 length 0x400 00:20:21.611 Nvme0n1 : 1.02 2772.60 173.29 0.00 0.00 22686.24 1414.98 32648.84 00:20:21.611 =================================================================================================================== 00:20:21.611 Total : 2772.60 173.29 0.00 0.00 22686.24 1414.98 32648.84 00:20:21.868 21:33:42 -- target/host_management.sh@101 -- # stoptarget 00:20:21.868 21:33:42 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:20:21.868 21:33:42 -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:20:21.868 21:33:42 -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:20:21.868 21:33:42 -- target/host_management.sh@40 -- # nvmftestfini 00:20:21.868 21:33:42 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:21.868 21:33:42 -- nvmf/common.sh@116 -- # sync 00:20:22.127 21:33:42 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:22.127 21:33:42 -- nvmf/common.sh@119 -- # set +e 00:20:22.127 21:33:42 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:22.127 21:33:42 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:22.127 rmmod nvme_tcp 00:20:22.127 rmmod nvme_fabrics 00:20:22.127 rmmod nvme_keyring 00:20:22.127 21:33:42 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:22.127 21:33:42 -- nvmf/common.sh@123 -- # set -e 00:20:22.127 21:33:42 -- nvmf/common.sh@124 -- # return 0 00:20:22.127 21:33:42 -- nvmf/common.sh@477 -- # '[' -n 72228 ']' 00:20:22.127 21:33:42 -- nvmf/common.sh@478 -- # killprocess 72228 00:20:22.127 21:33:42 -- common/autotest_common.sh@926 -- # '[' -z 72228 ']' 00:20:22.127 21:33:42 -- common/autotest_common.sh@930 -- # kill -0 72228 00:20:22.127 21:33:42 -- common/autotest_common.sh@931 -- # uname 00:20:22.127 21:33:42 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:22.127 21:33:42 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 72228 00:20:22.127 21:33:42 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:20:22.127 killing process with pid 72228 00:20:22.127 21:33:42 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:20:22.127 21:33:42 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 72228' 00:20:22.127 21:33:42 -- common/autotest_common.sh@945 -- # kill 72228 00:20:22.127 21:33:42 -- common/autotest_common.sh@950 -- # wait 72228 00:20:22.386 [2024-07-11 21:33:43.155755] app.c: 605:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:20:22.386 21:33:43 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:22.386 21:33:43 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:22.386 21:33:43 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:22.386 21:33:43 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:22.386 21:33:43 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:22.386 21:33:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:22.386 21:33:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:22.386 21:33:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:22.386 21:33:43 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:20:22.386 00:20:22.386 real 0m5.569s 00:20:22.386 user 
0m23.394s 00:20:22.386 sys 0m1.360s 00:20:22.386 21:33:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:22.386 21:33:43 -- common/autotest_common.sh@10 -- # set +x 00:20:22.386 ************************************ 00:20:22.386 END TEST nvmf_host_management 00:20:22.386 ************************************ 00:20:22.386 21:33:43 -- target/host_management.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:20:22.386 ************************************ 00:20:22.386 END TEST nvmf_host_management 00:20:22.386 ************************************ 00:20:22.386 00:20:22.386 real 0m6.162s 00:20:22.386 user 0m23.514s 00:20:22.386 sys 0m1.611s 00:20:22.386 21:33:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:22.386 21:33:43 -- common/autotest_common.sh@10 -- # set +x 00:20:22.386 21:33:43 -- nvmf/nvmf.sh@47 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:20:22.386 21:33:43 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:20:22.386 21:33:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:22.386 21:33:43 -- common/autotest_common.sh@10 -- # set +x 00:20:22.386 ************************************ 00:20:22.386 START TEST nvmf_lvol 00:20:22.386 ************************************ 00:20:22.386 21:33:43 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:20:22.646 * Looking for test storage... 00:20:22.646 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:22.646 21:33:43 -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:22.646 21:33:43 -- nvmf/common.sh@7 -- # uname -s 00:20:22.646 21:33:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:22.646 21:33:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:22.646 21:33:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:22.646 21:33:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:22.646 21:33:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:22.646 21:33:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:22.646 21:33:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:22.646 21:33:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:22.646 21:33:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:22.646 21:33:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:22.646 21:33:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:65f0dc09-2f81-4c7b-a413-2a2a000e2750 00:20:22.646 21:33:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=65f0dc09-2f81-4c7b-a413-2a2a000e2750 00:20:22.646 21:33:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:22.646 21:33:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:22.646 21:33:43 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:22.646 21:33:43 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:22.646 21:33:43 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:22.646 21:33:43 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:22.646 21:33:43 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:22.646 21:33:43 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:22.646 21:33:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:22.646 21:33:43 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:22.646 21:33:43 -- paths/export.sh@5 -- # export PATH 00:20:22.646 21:33:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:22.646 21:33:43 -- nvmf/common.sh@46 -- # : 0 00:20:22.646 21:33:43 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:22.646 21:33:43 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:22.646 21:33:43 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:22.646 21:33:43 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:22.646 21:33:43 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:22.646 21:33:43 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:22.646 21:33:43 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:22.646 21:33:43 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:22.646 21:33:43 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:22.646 21:33:43 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:22.646 21:33:43 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:20:22.646 21:33:43 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:20:22.646 21:33:43 -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:22.646 21:33:43 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:20:22.646 21:33:43 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:22.646 21:33:43 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
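nvmf_lvol.sh drives everything through rpc.py using the sizes defined above (two 64 MiB malloc bdevs, a 20 MiB lvol that is later resized to 30 MiB). The provisioning chain appears verbatim further down in this log; condensed into a sketch, with shell variables standing in for the UUIDs the real run captures:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc bdev_malloc_create 64 512                                   # Malloc0
$rpc bdev_malloc_create 64 512                                   # Malloc1
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'   # stripe the two malloc bdevs
lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)                   # prints the lvstore UUID
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)                  # 20 MiB lvol, prints its UUID
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# while spdk_nvme_perf runs against the export, the lvol is snapshotted, grown and cloned
snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
$rpc bdev_lvol_resize "$lvol" 30
clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
$rpc bdev_lvol_inflate "$clone"                                  # detach the clone from its snapshot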
00:20:22.646 21:33:43 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:22.646 21:33:43 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:22.646 21:33:43 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:22.646 21:33:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:22.646 21:33:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:22.646 21:33:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:22.646 21:33:43 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:20:22.646 21:33:43 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:20:22.647 21:33:43 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:20:22.647 21:33:43 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:20:22.647 21:33:43 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:20:22.647 21:33:43 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:20:22.647 21:33:43 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:22.647 21:33:43 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:22.647 21:33:43 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:22.647 21:33:43 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:20:22.647 21:33:43 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:22.647 21:33:43 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:22.647 21:33:43 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:22.647 21:33:43 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:22.647 21:33:43 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:22.647 21:33:43 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:22.647 21:33:43 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:22.647 21:33:43 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:22.647 21:33:43 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:20:22.647 21:33:43 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:20:22.647 Cannot find device "nvmf_tgt_br" 00:20:22.647 21:33:43 -- nvmf/common.sh@154 -- # true 00:20:22.647 21:33:43 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:20:22.647 Cannot find device "nvmf_tgt_br2" 00:20:22.647 21:33:43 -- nvmf/common.sh@155 -- # true 00:20:22.647 21:33:43 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:20:22.647 21:33:43 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:20:22.647 Cannot find device "nvmf_tgt_br" 00:20:22.647 21:33:43 -- nvmf/common.sh@157 -- # true 00:20:22.647 21:33:43 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:20:22.647 Cannot find device "nvmf_tgt_br2" 00:20:22.647 21:33:43 -- nvmf/common.sh@158 -- # true 00:20:22.647 21:33:43 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:20:22.647 21:33:43 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:20:22.647 21:33:43 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:22.647 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:22.647 21:33:43 -- nvmf/common.sh@161 -- # true 00:20:22.647 21:33:43 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:22.647 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:22.647 21:33:43 -- nvmf/common.sh@162 -- # true 00:20:22.647 21:33:43 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:20:22.647 21:33:43 -- nvmf/common.sh@168 -- # ip link add 
nvmf_init_if type veth peer name nvmf_init_br 00:20:22.647 21:33:43 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:22.647 21:33:43 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:22.647 21:33:43 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:22.647 21:33:43 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:22.905 21:33:43 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:22.905 21:33:43 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:22.905 21:33:43 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:22.905 21:33:43 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:20:22.905 21:33:43 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:20:22.905 21:33:43 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:20:22.905 21:33:43 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:20:22.905 21:33:43 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:22.905 21:33:43 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:22.905 21:33:43 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:22.905 21:33:43 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:20:22.905 21:33:43 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:20:22.905 21:33:43 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:20:22.905 21:33:43 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:22.905 21:33:43 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:22.905 21:33:43 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:22.905 21:33:43 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:22.905 21:33:43 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:20:22.905 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:22.905 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:20:22.905 00:20:22.905 --- 10.0.0.2 ping statistics --- 00:20:22.905 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:22.905 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:20:22.905 21:33:43 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:20:22.905 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:22.905 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:20:22.905 00:20:22.905 --- 10.0.0.3 ping statistics --- 00:20:22.905 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:22.905 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:20:22.905 21:33:43 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:22.905 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:22.905 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:20:22.905 00:20:22.905 --- 10.0.0.1 ping statistics --- 00:20:22.905 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:22.905 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:20:22.905 21:33:43 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:22.905 21:33:43 -- nvmf/common.sh@421 -- # return 0 00:20:22.905 21:33:43 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:22.905 21:33:43 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:22.905 21:33:43 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:22.905 21:33:43 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:22.905 21:33:43 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:22.905 21:33:43 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:22.905 21:33:43 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:22.905 21:33:43 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:20:22.905 21:33:43 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:22.905 21:33:43 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:22.905 21:33:43 -- common/autotest_common.sh@10 -- # set +x 00:20:22.905 21:33:43 -- nvmf/common.sh@469 -- # nvmfpid=72550 00:20:22.905 21:33:43 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:20:22.905 21:33:43 -- nvmf/common.sh@470 -- # waitforlisten 72550 00:20:22.905 21:33:43 -- common/autotest_common.sh@819 -- # '[' -z 72550 ']' 00:20:22.905 21:33:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:22.905 21:33:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:22.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:22.905 21:33:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:22.905 21:33:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:22.905 21:33:43 -- common/autotest_common.sh@10 -- # set +x 00:20:23.162 [2024-07-11 21:33:43.858860] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:20:23.162 [2024-07-11 21:33:43.858958] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:23.162 [2024-07-11 21:33:43.995820] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:23.162 [2024-07-11 21:33:44.096403] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:23.162 [2024-07-11 21:33:44.096863] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:23.162 [2024-07-11 21:33:44.097014] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:23.162 [2024-07-11 21:33:44.097165] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
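The "Cannot find device" and "Cannot open network namespace" messages above are just nvmf_veth_init clearing leftovers before rebuilding the test topology: the target runs inside the nvmf_tgt_ns_spdk namespace on 10.0.0.2, the initiator stays in the root namespace on 10.0.0.1, and a bridge ties the veth peers together. Stripped of the helper plumbing, the setup echoed above amounts to roughly:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator side pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target side pair
# (a second target pair, nvmf_tgt_if2 on 10.0.0.3, is added the same way)
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2                                             # root namespace -> target namespace
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1              # target namespace -> root namespace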
00:20:23.162 [2024-07-11 21:33:44.097358] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:23.162 [2024-07-11 21:33:44.097520] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:23.162 [2024-07-11 21:33:44.097529] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:24.091 21:33:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:24.091 21:33:44 -- common/autotest_common.sh@852 -- # return 0 00:20:24.091 21:33:44 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:24.091 21:33:44 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:24.091 21:33:44 -- common/autotest_common.sh@10 -- # set +x 00:20:24.091 21:33:44 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:24.091 21:33:44 -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:24.348 [2024-07-11 21:33:45.094255] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:24.348 21:33:45 -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:24.605 21:33:45 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:20:24.606 21:33:45 -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:24.863 21:33:45 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:20:24.863 21:33:45 -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:20:25.121 21:33:45 -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:20:25.379 21:33:46 -- target/nvmf_lvol.sh@29 -- # lvs=971604a7-9ea6-46c8-84fe-17b9f5c2cd2d 00:20:25.379 21:33:46 -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 971604a7-9ea6-46c8-84fe-17b9f5c2cd2d lvol 20 00:20:25.637 21:33:46 -- target/nvmf_lvol.sh@32 -- # lvol=dc3a78c1-bc54-4c5b-b681-7f2060d1f5a7 00:20:25.637 21:33:46 -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:20:25.895 21:33:46 -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 dc3a78c1-bc54-4c5b-b681-7f2060d1f5a7 00:20:26.152 21:33:46 -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:26.409 [2024-07-11 21:33:47.242385] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:26.410 21:33:47 -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:26.667 21:33:47 -- target/nvmf_lvol.sh@42 -- # perf_pid=72631 00:20:26.667 21:33:47 -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:20:26.667 21:33:47 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:20:27.598 21:33:48 -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot dc3a78c1-bc54-4c5b-b681-7f2060d1f5a7 MY_SNAPSHOT 00:20:27.855 21:33:48 -- target/nvmf_lvol.sh@47 -- # snapshot=e46e2933-6807-4b93-a922-08e836dd2ba8 00:20:27.855 21:33:48 -- target/nvmf_lvol.sh@48 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize dc3a78c1-bc54-4c5b-b681-7f2060d1f5a7 30 00:20:28.113 21:33:49 -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone e46e2933-6807-4b93-a922-08e836dd2ba8 MY_CLONE 00:20:28.371 21:33:49 -- target/nvmf_lvol.sh@49 -- # clone=0a9b1915-b604-4e78-a763-02000cc9186b 00:20:28.371 21:33:49 -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 0a9b1915-b604-4e78-a763-02000cc9186b 00:20:28.938 21:33:49 -- target/nvmf_lvol.sh@53 -- # wait 72631 00:20:37.105 Initializing NVMe Controllers 00:20:37.105 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:20:37.105 Controller IO queue size 128, less than required. 00:20:37.105 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:37.105 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:20:37.105 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:20:37.105 Initialization complete. Launching workers. 00:20:37.105 ======================================================== 00:20:37.105 Latency(us) 00:20:37.105 Device Information : IOPS MiB/s Average min max 00:20:37.105 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 9739.60 38.05 13149.47 2134.68 62110.73 00:20:37.105 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 9898.30 38.67 12937.47 218.06 65441.88 00:20:37.105 ======================================================== 00:20:37.105 Total : 19637.89 76.71 13042.61 218.06 65441.88 00:20:37.105 00:20:37.105 21:33:57 -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:37.364 21:33:58 -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete dc3a78c1-bc54-4c5b-b681-7f2060d1f5a7 00:20:37.623 21:33:58 -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 971604a7-9ea6-46c8-84fe-17b9f5c2cd2d 00:20:37.882 21:33:58 -- target/nvmf_lvol.sh@60 -- # rm -f 00:20:37.882 21:33:58 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:20:37.882 21:33:58 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:20:37.882 21:33:58 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:37.882 21:33:58 -- nvmf/common.sh@116 -- # sync 00:20:37.882 21:33:58 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:37.882 21:33:58 -- nvmf/common.sh@119 -- # set +e 00:20:37.882 21:33:58 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:37.882 21:33:58 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:37.882 rmmod nvme_tcp 00:20:37.882 rmmod nvme_fabrics 00:20:37.882 rmmod nvme_keyring 00:20:37.882 21:33:58 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:37.882 21:33:58 -- nvmf/common.sh@123 -- # set -e 00:20:37.882 21:33:58 -- nvmf/common.sh@124 -- # return 0 00:20:37.882 21:33:58 -- nvmf/common.sh@477 -- # '[' -n 72550 ']' 00:20:37.882 21:33:58 -- nvmf/common.sh@478 -- # killprocess 72550 00:20:37.882 21:33:58 -- common/autotest_common.sh@926 -- # '[' -z 72550 ']' 00:20:37.882 21:33:58 -- common/autotest_common.sh@930 -- # kill -0 72550 00:20:37.882 21:33:58 -- common/autotest_common.sh@931 -- # uname 00:20:37.882 21:33:58 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:37.882 21:33:58 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 
72550 00:20:37.882 killing process with pid 72550 00:20:37.882 21:33:58 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:20:37.882 21:33:58 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:20:37.882 21:33:58 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 72550' 00:20:37.882 21:33:58 -- common/autotest_common.sh@945 -- # kill 72550 00:20:37.882 21:33:58 -- common/autotest_common.sh@950 -- # wait 72550 00:20:38.140 21:33:59 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:38.140 21:33:59 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:38.140 21:33:59 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:38.140 21:33:59 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:38.140 21:33:59 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:38.140 21:33:59 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:38.140 21:33:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:38.140 21:33:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:38.140 21:33:59 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:20:38.140 ************************************ 00:20:38.140 END TEST nvmf_lvol 00:20:38.140 ************************************ 00:20:38.140 00:20:38.140 real 0m15.733s 00:20:38.140 user 1m4.794s 00:20:38.140 sys 0m4.801s 00:20:38.140 21:33:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:38.140 21:33:59 -- common/autotest_common.sh@10 -- # set +x 00:20:38.399 21:33:59 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:20:38.399 21:33:59 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:20:38.399 21:33:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:38.399 21:33:59 -- common/autotest_common.sh@10 -- # set +x 00:20:38.399 ************************************ 00:20:38.399 START TEST nvmf_lvs_grow 00:20:38.399 ************************************ 00:20:38.399 21:33:59 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:20:38.399 * Looking for test storage... 
00:20:38.399 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:38.399 21:33:59 -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:38.399 21:33:59 -- nvmf/common.sh@7 -- # uname -s 00:20:38.399 21:33:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:38.399 21:33:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:38.399 21:33:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:38.399 21:33:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:38.399 21:33:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:38.399 21:33:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:38.399 21:33:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:38.399 21:33:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:38.399 21:33:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:38.399 21:33:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:38.399 21:33:59 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:65f0dc09-2f81-4c7b-a413-2a2a000e2750 00:20:38.399 21:33:59 -- nvmf/common.sh@18 -- # NVME_HOSTID=65f0dc09-2f81-4c7b-a413-2a2a000e2750 00:20:38.399 21:33:59 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:38.399 21:33:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:38.399 21:33:59 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:38.399 21:33:59 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:38.399 21:33:59 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:38.399 21:33:59 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:38.399 21:33:59 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:38.399 21:33:59 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:38.399 21:33:59 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:38.399 21:33:59 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:38.399 21:33:59 -- 
paths/export.sh@5 -- # export PATH 00:20:38.399 21:33:59 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:38.399 21:33:59 -- nvmf/common.sh@46 -- # : 0 00:20:38.399 21:33:59 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:38.399 21:33:59 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:38.399 21:33:59 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:38.399 21:33:59 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:38.399 21:33:59 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:38.399 21:33:59 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:38.399 21:33:59 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:38.399 21:33:59 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:38.399 21:33:59 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:38.399 21:33:59 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:38.399 21:33:59 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:20:38.399 21:33:59 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:38.399 21:33:59 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:38.399 21:33:59 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:38.399 21:33:59 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:38.399 21:33:59 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:38.399 21:33:59 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:38.399 21:33:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:38.399 21:33:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:38.399 21:33:59 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:20:38.399 21:33:59 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:20:38.399 21:33:59 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:20:38.399 21:33:59 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:20:38.399 21:33:59 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:20:38.399 21:33:59 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:20:38.399 21:33:59 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:38.399 21:33:59 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:38.399 21:33:59 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:38.399 21:33:59 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:20:38.399 21:33:59 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:38.399 21:33:59 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:38.399 21:33:59 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:38.399 21:33:59 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:38.399 21:33:59 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:38.399 21:33:59 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:38.399 21:33:59 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:38.399 21:33:59 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:38.399 21:33:59 -- 
nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:20:38.399 21:33:59 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:20:38.399 Cannot find device "nvmf_tgt_br" 00:20:38.399 21:33:59 -- nvmf/common.sh@154 -- # true 00:20:38.399 21:33:59 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:20:38.399 Cannot find device "nvmf_tgt_br2" 00:20:38.399 21:33:59 -- nvmf/common.sh@155 -- # true 00:20:38.399 21:33:59 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:20:38.399 21:33:59 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:20:38.399 Cannot find device "nvmf_tgt_br" 00:20:38.399 21:33:59 -- nvmf/common.sh@157 -- # true 00:20:38.399 21:33:59 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:20:38.399 Cannot find device "nvmf_tgt_br2" 00:20:38.399 21:33:59 -- nvmf/common.sh@158 -- # true 00:20:38.399 21:33:59 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:20:38.399 21:33:59 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:20:38.399 21:33:59 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:38.399 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:38.399 21:33:59 -- nvmf/common.sh@161 -- # true 00:20:38.399 21:33:59 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:38.399 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:38.399 21:33:59 -- nvmf/common.sh@162 -- # true 00:20:38.399 21:33:59 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:20:38.399 21:33:59 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:38.658 21:33:59 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:38.658 21:33:59 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:38.658 21:33:59 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:38.658 21:33:59 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:38.658 21:33:59 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:38.658 21:33:59 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:38.658 21:33:59 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:38.658 21:33:59 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:20:38.658 21:33:59 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:20:38.658 21:33:59 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:20:38.658 21:33:59 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:20:38.658 21:33:59 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:38.658 21:33:59 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:38.658 21:33:59 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:38.658 21:33:59 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:20:38.658 21:33:59 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:20:38.658 21:33:59 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:20:38.658 21:33:59 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:38.658 21:33:59 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:38.658 21:33:59 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 
-i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:38.658 21:33:59 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:38.658 21:33:59 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:20:38.658 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:38.658 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.109 ms 00:20:38.658 00:20:38.658 --- 10.0.0.2 ping statistics --- 00:20:38.658 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:38.658 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:20:38.658 21:33:59 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:20:38.658 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:38.658 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:20:38.658 00:20:38.658 --- 10.0.0.3 ping statistics --- 00:20:38.658 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:38.658 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:20:38.658 21:33:59 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:38.658 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:38.658 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:20:38.658 00:20:38.658 --- 10.0.0.1 ping statistics --- 00:20:38.658 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:38.658 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:20:38.658 21:33:59 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:38.658 21:33:59 -- nvmf/common.sh@421 -- # return 0 00:20:38.658 21:33:59 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:38.658 21:33:59 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:38.658 21:33:59 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:38.658 21:33:59 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:38.658 21:33:59 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:38.658 21:33:59 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:38.658 21:33:59 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:38.658 21:33:59 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:20:38.658 21:33:59 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:38.658 21:33:59 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:38.658 21:33:59 -- common/autotest_common.sh@10 -- # set +x 00:20:38.658 21:33:59 -- nvmf/common.sh@469 -- # nvmfpid=72954 00:20:38.658 21:33:59 -- nvmf/common.sh@470 -- # waitforlisten 72954 00:20:38.658 21:33:59 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:20:38.658 21:33:59 -- common/autotest_common.sh@819 -- # '[' -z 72954 ']' 00:20:38.658 21:33:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:38.658 21:33:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:38.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:38.658 21:33:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:38.658 21:33:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:38.658 21:33:59 -- common/autotest_common.sh@10 -- # set +x 00:20:38.917 [2024-07-11 21:33:59.618962] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:20:38.917 [2024-07-11 21:33:59.619076] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:38.917 [2024-07-11 21:33:59.761062] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:38.917 [2024-07-11 21:33:59.865523] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:38.917 [2024-07-11 21:33:59.865716] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:38.917 [2024-07-11 21:33:59.865735] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:38.917 [2024-07-11 21:33:59.865749] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:38.917 [2024-07-11 21:33:59.865793] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:39.852 21:34:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:39.852 21:34:00 -- common/autotest_common.sh@852 -- # return 0 00:20:39.852 21:34:00 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:39.852 21:34:00 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:39.852 21:34:00 -- common/autotest_common.sh@10 -- # set +x 00:20:39.852 21:34:00 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:39.852 21:34:00 -- target/nvmf_lvs_grow.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:40.110 [2024-07-11 21:34:00.866675] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:40.110 21:34:00 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:20:40.110 21:34:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:20:40.110 21:34:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:40.110 21:34:00 -- common/autotest_common.sh@10 -- # set +x 00:20:40.110 ************************************ 00:20:40.110 START TEST lvs_grow_clean 00:20:40.110 ************************************ 00:20:40.110 21:34:00 -- common/autotest_common.sh@1104 -- # lvs_grow 00:20:40.110 21:34:00 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:20:40.110 21:34:00 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:20:40.110 21:34:00 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:20:40.110 21:34:00 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:20:40.110 21:34:00 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:20:40.110 21:34:00 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:20:40.110 21:34:00 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:20:40.110 21:34:00 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:20:40.110 21:34:00 -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:20:40.368 21:34:01 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:20:40.368 21:34:01 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:20:40.627 21:34:01 -- target/nvmf_lvs_grow.sh@28 
-- # lvs=0b5761d4-0cea-4be7-9cee-38a866f92077 00:20:40.627 21:34:01 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:20:40.627 21:34:01 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0b5761d4-0cea-4be7-9cee-38a866f92077 00:20:40.885 21:34:01 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:20:40.885 21:34:01 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:20:40.885 21:34:01 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 0b5761d4-0cea-4be7-9cee-38a866f92077 lvol 150 00:20:41.142 21:34:01 -- target/nvmf_lvs_grow.sh@33 -- # lvol=9df4331d-8273-4818-8615-190fee57d488 00:20:41.142 21:34:01 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:20:41.142 21:34:01 -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:20:41.401 [2024-07-11 21:34:02.134688] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:20:41.401 [2024-07-11 21:34:02.134792] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:20:41.401 true 00:20:41.401 21:34:02 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0b5761d4-0cea-4be7-9cee-38a866f92077 00:20:41.401 21:34:02 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:20:41.660 21:34:02 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:20:41.660 21:34:02 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:20:41.919 21:34:02 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 9df4331d-8273-4818-8615-190fee57d488 00:20:42.176 21:34:02 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:42.435 [2024-07-11 21:34:03.171662] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:42.435 21:34:03 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:42.693 21:34:03 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:20:42.693 21:34:03 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=73041 00:20:42.693 21:34:03 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:42.693 21:34:03 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 73041 /var/tmp/bdevperf.sock 00:20:42.693 21:34:03 -- common/autotest_common.sh@819 -- # '[' -z 73041 ']' 00:20:42.693 21:34:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:42.693 21:34:03 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:42.693 21:34:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:42.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
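For reference, the clean lvs_grow sequence being traced here condenses to the RPC calls below. This is only a sketch: the paths, sizes and bdev names are the ones nvmf_lvs_grow.sh uses in this run, it assumes nvmf_tgt is already listening on the default /var/tmp/spdk.sock, and the grow and re-check steps appear further down in the log.

    # Condensed sketch of the clean lvs_grow flow (names taken from the trace above).
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    aio_file=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev

    truncate -s 200M "$aio_file"                                   # 200M backing file
    "$rpc" bdev_aio_create "$aio_file" aio_bdev 4096               # expose it as an AIO bdev
    lvs=$("$rpc" bdev_lvol_create_lvstore --cluster-sz 4194304 \
          --md-pages-per-cluster-ratio 300 aio_bdev lvs)           # 4 MiB clusters -> 49 data clusters
    "$rpc" bdev_lvol_create -u "$lvs" lvol 150                     # 150M lvol on top of the lvstore

    truncate -s 400M "$aio_file"                                   # grow the backing file
    "$rpc" bdev_aio_rescan aio_bdev                                # AIO bdev picks up the new size
    "$rpc" bdev_lvol_grow_lvstore -u "$lvs"                        # lvstore claims the new clusters
    "$rpc" bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 49 before, 99 after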
00:20:42.693 21:34:03 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:42.693 21:34:03 -- common/autotest_common.sh@10 -- # set +x 00:20:42.693 [2024-07-11 21:34:03.459403] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:20:42.693 [2024-07-11 21:34:03.459509] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73041 ] 00:20:42.693 [2024-07-11 21:34:03.599014] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:42.951 [2024-07-11 21:34:03.703199] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:43.520 21:34:04 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:43.520 21:34:04 -- common/autotest_common.sh@852 -- # return 0 00:20:43.520 21:34:04 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:20:44.085 Nvme0n1 00:20:44.085 21:34:04 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:20:44.085 [ 00:20:44.085 { 00:20:44.085 "name": "Nvme0n1", 00:20:44.085 "aliases": [ 00:20:44.085 "9df4331d-8273-4818-8615-190fee57d488" 00:20:44.085 ], 00:20:44.085 "product_name": "NVMe disk", 00:20:44.085 "block_size": 4096, 00:20:44.085 "num_blocks": 38912, 00:20:44.085 "uuid": "9df4331d-8273-4818-8615-190fee57d488", 00:20:44.085 "assigned_rate_limits": { 00:20:44.085 "rw_ios_per_sec": 0, 00:20:44.085 "rw_mbytes_per_sec": 0, 00:20:44.085 "r_mbytes_per_sec": 0, 00:20:44.085 "w_mbytes_per_sec": 0 00:20:44.085 }, 00:20:44.085 "claimed": false, 00:20:44.085 "zoned": false, 00:20:44.085 "supported_io_types": { 00:20:44.085 "read": true, 00:20:44.085 "write": true, 00:20:44.085 "unmap": true, 00:20:44.085 "write_zeroes": true, 00:20:44.085 "flush": true, 00:20:44.085 "reset": true, 00:20:44.085 "compare": true, 00:20:44.085 "compare_and_write": true, 00:20:44.085 "abort": true, 00:20:44.085 "nvme_admin": true, 00:20:44.085 "nvme_io": true 00:20:44.085 }, 00:20:44.085 "driver_specific": { 00:20:44.085 "nvme": [ 00:20:44.085 { 00:20:44.085 "trid": { 00:20:44.085 "trtype": "TCP", 00:20:44.085 "adrfam": "IPv4", 00:20:44.085 "traddr": "10.0.0.2", 00:20:44.085 "trsvcid": "4420", 00:20:44.085 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:20:44.085 }, 00:20:44.085 "ctrlr_data": { 00:20:44.085 "cntlid": 1, 00:20:44.085 "vendor_id": "0x8086", 00:20:44.085 "model_number": "SPDK bdev Controller", 00:20:44.085 "serial_number": "SPDK0", 00:20:44.085 "firmware_revision": "24.01.1", 00:20:44.085 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:44.085 "oacs": { 00:20:44.085 "security": 0, 00:20:44.085 "format": 0, 00:20:44.085 "firmware": 0, 00:20:44.085 "ns_manage": 0 00:20:44.085 }, 00:20:44.085 "multi_ctrlr": true, 00:20:44.085 "ana_reporting": false 00:20:44.085 }, 00:20:44.085 "vs": { 00:20:44.085 "nvme_version": "1.3" 00:20:44.085 }, 00:20:44.085 "ns_data": { 00:20:44.085 "id": 1, 00:20:44.085 "can_share": true 00:20:44.085 } 00:20:44.085 } 00:20:44.085 ], 00:20:44.085 "mp_policy": "active_passive" 00:20:44.085 } 00:20:44.085 } 00:20:44.085 ] 00:20:44.343 21:34:05 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=73060 00:20:44.343 21:34:05 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:20:44.343 21:34:05 -- 
target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:44.343 Running I/O for 10 seconds... 00:20:45.360 Latency(us) 00:20:45.360 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:45.360 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:45.360 Nvme0n1 : 1.00 6604.00 25.80 0.00 0.00 0.00 0.00 0.00 00:20:45.360 =================================================================================================================== 00:20:45.360 Total : 6604.00 25.80 0.00 0.00 0.00 0.00 0.00 00:20:45.360 00:20:46.291 21:34:07 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 0b5761d4-0cea-4be7-9cee-38a866f92077 00:20:46.291 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:46.291 Nvme0n1 : 2.00 6540.50 25.55 0.00 0.00 0.00 0.00 0.00 00:20:46.291 =================================================================================================================== 00:20:46.291 Total : 6540.50 25.55 0.00 0.00 0.00 0.00 0.00 00:20:46.291 00:20:46.549 true 00:20:46.549 21:34:07 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:20:46.549 21:34:07 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0b5761d4-0cea-4be7-9cee-38a866f92077 00:20:46.807 21:34:07 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:20:46.807 21:34:07 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:20:46.807 21:34:07 -- target/nvmf_lvs_grow.sh@65 -- # wait 73060 00:20:47.372 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:47.372 Nvme0n1 : 3.00 6519.33 25.47 0.00 0.00 0.00 0.00 0.00 00:20:47.372 =================================================================================================================== 00:20:47.373 Total : 6519.33 25.47 0.00 0.00 0.00 0.00 0.00 00:20:47.373 00:20:48.306 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:48.306 Nvme0n1 : 4.00 6477.00 25.30 0.00 0.00 0.00 0.00 0.00 00:20:48.306 =================================================================================================================== 00:20:48.306 Total : 6477.00 25.30 0.00 0.00 0.00 0.00 0.00 00:20:48.306 00:20:49.243 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:49.243 Nvme0n1 : 5.00 6477.00 25.30 0.00 0.00 0.00 0.00 0.00 00:20:49.243 =================================================================================================================== 00:20:49.243 Total : 6477.00 25.30 0.00 0.00 0.00 0.00 0.00 00:20:49.243 00:20:50.620 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:50.620 Nvme0n1 : 6.00 6455.83 25.22 0.00 0.00 0.00 0.00 0.00 00:20:50.620 =================================================================================================================== 00:20:50.620 Total : 6455.83 25.22 0.00 0.00 0.00 0.00 0.00 00:20:50.620 00:20:51.555 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:51.555 Nvme0n1 : 7.00 6477.00 25.30 0.00 0.00 0.00 0.00 0.00 00:20:51.555 =================================================================================================================== 00:20:51.555 Total : 6477.00 25.30 0.00 0.00 0.00 0.00 0.00 00:20:51.555 00:20:52.489 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:20:52.489 Nvme0n1 : 8.00 6508.75 25.42 0.00 0.00 0.00 0.00 0.00 00:20:52.489 =================================================================================================================== 00:20:52.489 Total : 6508.75 25.42 0.00 0.00 0.00 0.00 0.00 00:20:52.489 00:20:53.424 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:53.424 Nvme0n1 : 9.00 6519.33 25.47 0.00 0.00 0.00 0.00 0.00 00:20:53.424 =================================================================================================================== 00:20:53.424 Total : 6519.33 25.47 0.00 0.00 0.00 0.00 0.00 00:20:53.424 00:20:54.360 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:54.360 Nvme0n1 : 10.00 6515.10 25.45 0.00 0.00 0.00 0.00 0.00 00:20:54.360 =================================================================================================================== 00:20:54.360 Total : 6515.10 25.45 0.00 0.00 0.00 0.00 0.00 00:20:54.360 00:20:54.360 00:20:54.360 Latency(us) 00:20:54.360 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:54.360 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:54.360 Nvme0n1 : 10.00 6525.36 25.49 0.00 0.00 19609.35 17396.83 48139.17 00:20:54.360 =================================================================================================================== 00:20:54.360 Total : 6525.36 25.49 0.00 0.00 19609.35 17396.83 48139.17 00:20:54.360 0 00:20:54.360 21:34:15 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 73041 00:20:54.360 21:34:15 -- common/autotest_common.sh@926 -- # '[' -z 73041 ']' 00:20:54.360 21:34:15 -- common/autotest_common.sh@930 -- # kill -0 73041 00:20:54.360 21:34:15 -- common/autotest_common.sh@931 -- # uname 00:20:54.360 21:34:15 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:54.360 21:34:15 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 73041 00:20:54.360 killing process with pid 73041 00:20:54.360 Received shutdown signal, test time was about 10.000000 seconds 00:20:54.360 00:20:54.360 Latency(us) 00:20:54.360 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:54.360 =================================================================================================================== 00:20:54.360 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:54.360 21:34:15 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:20:54.360 21:34:15 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:20:54.360 21:34:15 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 73041' 00:20:54.360 21:34:15 -- common/autotest_common.sh@945 -- # kill 73041 00:20:54.360 21:34:15 -- common/autotest_common.sh@950 -- # wait 73041 00:20:54.619 21:34:15 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:54.877 21:34:15 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0b5761d4-0cea-4be7-9cee-38a866f92077 00:20:54.877 21:34:15 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:20:55.135 21:34:15 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:20:55.135 21:34:15 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:20:55.135 21:34:15 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:20:55.393 [2024-07-11 21:34:16.190157] vbdev_lvol.c: 
150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:20:55.393 21:34:16 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0b5761d4-0cea-4be7-9cee-38a866f92077 00:20:55.393 21:34:16 -- common/autotest_common.sh@640 -- # local es=0 00:20:55.393 21:34:16 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0b5761d4-0cea-4be7-9cee-38a866f92077 00:20:55.393 21:34:16 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:55.393 21:34:16 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:55.393 21:34:16 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:55.393 21:34:16 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:55.393 21:34:16 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:55.393 21:34:16 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:55.393 21:34:16 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:55.393 21:34:16 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:20:55.393 21:34:16 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0b5761d4-0cea-4be7-9cee-38a866f92077 00:20:55.651 request: 00:20:55.651 { 00:20:55.651 "uuid": "0b5761d4-0cea-4be7-9cee-38a866f92077", 00:20:55.651 "method": "bdev_lvol_get_lvstores", 00:20:55.651 "req_id": 1 00:20:55.651 } 00:20:55.651 Got JSON-RPC error response 00:20:55.651 response: 00:20:55.651 { 00:20:55.651 "code": -19, 00:20:55.651 "message": "No such device" 00:20:55.651 } 00:20:55.651 21:34:16 -- common/autotest_common.sh@643 -- # es=1 00:20:55.651 21:34:16 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:20:55.651 21:34:16 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:20:55.651 21:34:16 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:20:55.651 21:34:16 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:20:55.909 aio_bdev 00:20:55.909 21:34:16 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 9df4331d-8273-4818-8615-190fee57d488 00:20:55.909 21:34:16 -- common/autotest_common.sh@887 -- # local bdev_name=9df4331d-8273-4818-8615-190fee57d488 00:20:55.909 21:34:16 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:20:55.909 21:34:16 -- common/autotest_common.sh@889 -- # local i 00:20:55.909 21:34:16 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:20:55.909 21:34:16 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:20:55.909 21:34:16 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:20:56.167 21:34:17 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9df4331d-8273-4818-8615-190fee57d488 -t 2000 00:20:56.425 [ 00:20:56.425 { 00:20:56.425 "name": "9df4331d-8273-4818-8615-190fee57d488", 00:20:56.425 "aliases": [ 00:20:56.425 "lvs/lvol" 00:20:56.425 ], 00:20:56.425 "product_name": "Logical Volume", 00:20:56.425 "block_size": 4096, 00:20:56.425 "num_blocks": 38912, 00:20:56.425 "uuid": "9df4331d-8273-4818-8615-190fee57d488", 00:20:56.425 "assigned_rate_limits": { 00:20:56.425 "rw_ios_per_sec": 
0, 00:20:56.425 "rw_mbytes_per_sec": 0, 00:20:56.425 "r_mbytes_per_sec": 0, 00:20:56.425 "w_mbytes_per_sec": 0 00:20:56.425 }, 00:20:56.425 "claimed": false, 00:20:56.425 "zoned": false, 00:20:56.425 "supported_io_types": { 00:20:56.425 "read": true, 00:20:56.425 "write": true, 00:20:56.425 "unmap": true, 00:20:56.425 "write_zeroes": true, 00:20:56.425 "flush": false, 00:20:56.425 "reset": true, 00:20:56.425 "compare": false, 00:20:56.425 "compare_and_write": false, 00:20:56.425 "abort": false, 00:20:56.425 "nvme_admin": false, 00:20:56.425 "nvme_io": false 00:20:56.425 }, 00:20:56.425 "driver_specific": { 00:20:56.425 "lvol": { 00:20:56.425 "lvol_store_uuid": "0b5761d4-0cea-4be7-9cee-38a866f92077", 00:20:56.425 "base_bdev": "aio_bdev", 00:20:56.425 "thin_provision": false, 00:20:56.425 "snapshot": false, 00:20:56.425 "clone": false, 00:20:56.425 "esnap_clone": false 00:20:56.425 } 00:20:56.425 } 00:20:56.425 } 00:20:56.425 ] 00:20:56.425 21:34:17 -- common/autotest_common.sh@895 -- # return 0 00:20:56.425 21:34:17 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0b5761d4-0cea-4be7-9cee-38a866f92077 00:20:56.425 21:34:17 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:20:56.683 21:34:17 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:20:56.683 21:34:17 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0b5761d4-0cea-4be7-9cee-38a866f92077 00:20:56.683 21:34:17 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:20:56.941 21:34:17 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:20:56.941 21:34:17 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 9df4331d-8273-4818-8615-190fee57d488 00:20:57.199 21:34:17 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0b5761d4-0cea-4be7-9cee-38a866f92077 00:20:57.456 21:34:18 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:20:57.713 21:34:18 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:20:57.970 00:20:57.970 real 0m18.003s 00:20:57.971 user 0m16.878s 00:20:57.971 sys 0m2.608s 00:20:57.971 21:34:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:57.971 ************************************ 00:20:57.971 END TEST lvs_grow_clean 00:20:57.971 ************************************ 00:20:57.971 21:34:18 -- common/autotest_common.sh@10 -- # set +x 00:20:58.229 21:34:18 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:20:58.229 21:34:18 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:20:58.229 21:34:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:58.229 21:34:18 -- common/autotest_common.sh@10 -- # set +x 00:20:58.229 ************************************ 00:20:58.229 START TEST lvs_grow_dirty 00:20:58.229 ************************************ 00:20:58.229 21:34:18 -- common/autotest_common.sh@1104 -- # lvs_grow dirty 00:20:58.229 21:34:18 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:20:58.229 21:34:18 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:20:58.229 21:34:18 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:20:58.229 21:34:18 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:20:58.229 21:34:18 -- target/nvmf_lvs_grow.sh@19 -- # local 
aio_final_size_mb=400 00:20:58.229 21:34:18 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:20:58.229 21:34:18 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:20:58.229 21:34:18 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:20:58.229 21:34:18 -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:20:58.487 21:34:19 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:20:58.487 21:34:19 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:20:58.745 21:34:19 -- target/nvmf_lvs_grow.sh@28 -- # lvs=aeb9e372-8924-4eb9-8179-3e288154a247 00:20:58.745 21:34:19 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aeb9e372-8924-4eb9-8179-3e288154a247 00:20:58.745 21:34:19 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:20:59.004 21:34:19 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:20:59.004 21:34:19 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:20:59.004 21:34:19 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u aeb9e372-8924-4eb9-8179-3e288154a247 lvol 150 00:20:59.262 21:34:19 -- target/nvmf_lvs_grow.sh@33 -- # lvol=39c74011-47b4-4771-b008-ebbdc1a45a92 00:20:59.262 21:34:19 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:20:59.262 21:34:19 -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:20:59.262 [2024-07-11 21:34:20.186355] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:20:59.262 [2024-07-11 21:34:20.186452] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:20:59.262 true 00:20:59.262 21:34:20 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aeb9e372-8924-4eb9-8179-3e288154a247 00:20:59.262 21:34:20 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:20:59.520 21:34:20 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:20:59.520 21:34:20 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:20:59.778 21:34:20 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 39c74011-47b4-4771-b008-ebbdc1a45a92 00:21:00.036 21:34:20 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:00.294 21:34:21 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:00.552 21:34:21 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=73299 00:21:00.552 21:34:21 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:21:00.552 Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:00.552 21:34:21 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:00.552 21:34:21 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 73299 /var/tmp/bdevperf.sock 00:21:00.552 21:34:21 -- common/autotest_common.sh@819 -- # '[' -z 73299 ']' 00:21:00.552 21:34:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:00.552 21:34:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:00.552 21:34:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:00.552 21:34:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:00.552 21:34:21 -- common/autotest_common.sh@10 -- # set +x 00:21:00.552 [2024-07-11 21:34:21.416161] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:21:00.552 [2024-07-11 21:34:21.416569] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73299 ] 00:21:00.812 [2024-07-11 21:34:21.558434] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:00.812 [2024-07-11 21:34:21.668983] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:01.397 21:34:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:01.397 21:34:22 -- common/autotest_common.sh@852 -- # return 0 00:21:01.397 21:34:22 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:21:01.963 Nvme0n1 00:21:01.963 21:34:22 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:21:01.963 [ 00:21:01.963 { 00:21:01.963 "name": "Nvme0n1", 00:21:01.964 "aliases": [ 00:21:01.964 "39c74011-47b4-4771-b008-ebbdc1a45a92" 00:21:01.964 ], 00:21:01.964 "product_name": "NVMe disk", 00:21:01.964 "block_size": 4096, 00:21:01.964 "num_blocks": 38912, 00:21:01.964 "uuid": "39c74011-47b4-4771-b008-ebbdc1a45a92", 00:21:01.964 "assigned_rate_limits": { 00:21:01.964 "rw_ios_per_sec": 0, 00:21:01.964 "rw_mbytes_per_sec": 0, 00:21:01.964 "r_mbytes_per_sec": 0, 00:21:01.964 "w_mbytes_per_sec": 0 00:21:01.964 }, 00:21:01.964 "claimed": false, 00:21:01.964 "zoned": false, 00:21:01.964 "supported_io_types": { 00:21:01.964 "read": true, 00:21:01.964 "write": true, 00:21:01.964 "unmap": true, 00:21:01.964 "write_zeroes": true, 00:21:01.964 "flush": true, 00:21:01.964 "reset": true, 00:21:01.964 "compare": true, 00:21:01.964 "compare_and_write": true, 00:21:01.964 "abort": true, 00:21:01.964 "nvme_admin": true, 00:21:01.964 "nvme_io": true 00:21:01.964 }, 00:21:01.964 "driver_specific": { 00:21:01.964 "nvme": [ 00:21:01.964 { 00:21:01.964 "trid": { 00:21:01.964 "trtype": "TCP", 00:21:01.964 "adrfam": "IPv4", 00:21:01.964 "traddr": "10.0.0.2", 00:21:01.964 "trsvcid": "4420", 00:21:01.964 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:01.964 }, 00:21:01.964 "ctrlr_data": { 00:21:01.964 "cntlid": 1, 00:21:01.964 "vendor_id": "0x8086", 00:21:01.964 "model_number": "SPDK bdev Controller", 00:21:01.964 "serial_number": "SPDK0", 00:21:01.964 "firmware_revision": "24.01.1", 00:21:01.964 "subnqn": 
"nqn.2016-06.io.spdk:cnode0", 00:21:01.964 "oacs": { 00:21:01.964 "security": 0, 00:21:01.964 "format": 0, 00:21:01.964 "firmware": 0, 00:21:01.964 "ns_manage": 0 00:21:01.964 }, 00:21:01.964 "multi_ctrlr": true, 00:21:01.964 "ana_reporting": false 00:21:01.964 }, 00:21:01.964 "vs": { 00:21:01.964 "nvme_version": "1.3" 00:21:01.964 }, 00:21:01.964 "ns_data": { 00:21:01.964 "id": 1, 00:21:01.964 "can_share": true 00:21:01.964 } 00:21:01.964 } 00:21:01.964 ], 00:21:01.964 "mp_policy": "active_passive" 00:21:01.964 } 00:21:01.964 } 00:21:01.964 ] 00:21:01.964 21:34:22 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:01.964 21:34:22 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=73323 00:21:01.964 21:34:22 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:21:02.221 Running I/O for 10 seconds... 00:21:03.155 Latency(us) 00:21:03.155 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:03.155 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:03.155 Nvme0n1 : 1.00 7747.00 30.26 0.00 0.00 0.00 0.00 0.00 00:21:03.155 =================================================================================================================== 00:21:03.155 Total : 7747.00 30.26 0.00 0.00 0.00 0.00 0.00 00:21:03.155 00:21:04.089 21:34:24 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u aeb9e372-8924-4eb9-8179-3e288154a247 00:21:04.089 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:04.089 Nvme0n1 : 2.00 7683.50 30.01 0.00 0.00 0.00 0.00 0.00 00:21:04.089 =================================================================================================================== 00:21:04.089 Total : 7683.50 30.01 0.00 0.00 0.00 0.00 0.00 00:21:04.089 00:21:04.347 true 00:21:04.347 21:34:25 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:21:04.347 21:34:25 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aeb9e372-8924-4eb9-8179-3e288154a247 00:21:04.605 21:34:25 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:21:04.605 21:34:25 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:21:04.605 21:34:25 -- target/nvmf_lvs_grow.sh@65 -- # wait 73323 00:21:05.223 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:05.223 Nvme0n1 : 3.00 7620.00 29.77 0.00 0.00 0.00 0.00 0.00 00:21:05.223 =================================================================================================================== 00:21:05.223 Total : 7620.00 29.77 0.00 0.00 0.00 0.00 0.00 00:21:05.223 00:21:06.157 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:06.157 Nvme0n1 : 4.00 7620.00 29.77 0.00 0.00 0.00 0.00 0.00 00:21:06.157 =================================================================================================================== 00:21:06.157 Total : 7620.00 29.77 0.00 0.00 0.00 0.00 0.00 00:21:06.157 00:21:07.092 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:07.092 Nvme0n1 : 5.00 7569.20 29.57 0.00 0.00 0.00 0.00 0.00 00:21:07.092 =================================================================================================================== 00:21:07.092 Total : 7569.20 29.57 0.00 0.00 0.00 0.00 0.00 00:21:07.092 00:21:08.028 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:21:08.028 Nvme0n1 : 6.00 7535.33 29.43 0.00 0.00 0.00 0.00 0.00 00:21:08.028 =================================================================================================================== 00:21:08.028 Total : 7535.33 29.43 0.00 0.00 0.00 0.00 0.00 00:21:08.028 00:21:09.403 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:09.403 Nvme0n1 : 7.00 7493.00 29.27 0.00 0.00 0.00 0.00 0.00 00:21:09.403 =================================================================================================================== 00:21:09.403 Total : 7493.00 29.27 0.00 0.00 0.00 0.00 0.00 00:21:09.403 00:21:10.336 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:10.336 Nvme0n1 : 8.00 7225.38 28.22 0.00 0.00 0.00 0.00 0.00 00:21:10.336 =================================================================================================================== 00:21:10.336 Total : 7225.38 28.22 0.00 0.00 0.00 0.00 0.00 00:21:10.336 00:21:11.271 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:11.271 Nvme0n1 : 9.00 7198.67 28.12 0.00 0.00 0.00 0.00 0.00 00:21:11.271 =================================================================================================================== 00:21:11.271 Total : 7198.67 28.12 0.00 0.00 0.00 0.00 0.00 00:21:11.271 00:21:12.215 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:12.215 Nvme0n1 : 10.00 7177.30 28.04 0.00 0.00 0.00 0.00 0.00 00:21:12.215 =================================================================================================================== 00:21:12.215 Total : 7177.30 28.04 0.00 0.00 0.00 0.00 0.00 00:21:12.215 00:21:12.215 00:21:12.215 Latency(us) 00:21:12.215 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:12.215 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:12.215 Nvme0n1 : 10.01 7181.28 28.05 0.00 0.00 17819.66 6821.70 255471.24 00:21:12.215 =================================================================================================================== 00:21:12.215 Total : 7181.28 28.05 0.00 0.00 17819.66 6821.70 255471.24 00:21:12.215 0 00:21:12.215 21:34:32 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 73299 00:21:12.215 21:34:32 -- common/autotest_common.sh@926 -- # '[' -z 73299 ']' 00:21:12.215 21:34:32 -- common/autotest_common.sh@930 -- # kill -0 73299 00:21:12.215 21:34:32 -- common/autotest_common.sh@931 -- # uname 00:21:12.215 21:34:32 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:12.215 21:34:32 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 73299 00:21:12.215 killing process with pid 73299 00:21:12.215 Received shutdown signal, test time was about 10.000000 seconds 00:21:12.215 00:21:12.215 Latency(us) 00:21:12.215 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:12.215 =================================================================================================================== 00:21:12.215 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:12.215 21:34:33 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:21:12.215 21:34:33 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:21:12.215 21:34:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 73299' 00:21:12.215 21:34:33 -- common/autotest_common.sh@945 -- # kill 73299 00:21:12.215 21:34:33 -- common/autotest_common.sh@950 -- # wait 73299 00:21:12.480 21:34:33 -- 
target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:21:12.738 21:34:33 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aeb9e372-8924-4eb9-8179-3e288154a247 00:21:12.738 21:34:33 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:21:12.995 21:34:33 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:21:12.996 21:34:33 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:21:12.996 21:34:33 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 72954 00:21:12.996 21:34:33 -- target/nvmf_lvs_grow.sh@74 -- # wait 72954 00:21:12.996 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 72954 Killed "${NVMF_APP[@]}" "$@" 00:21:12.996 21:34:33 -- target/nvmf_lvs_grow.sh@74 -- # true 00:21:12.996 21:34:33 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:21:12.996 21:34:33 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:12.996 21:34:33 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:12.996 21:34:33 -- common/autotest_common.sh@10 -- # set +x 00:21:12.996 21:34:33 -- nvmf/common.sh@469 -- # nvmfpid=73449 00:21:12.996 21:34:33 -- nvmf/common.sh@470 -- # waitforlisten 73449 00:21:12.996 21:34:33 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:21:12.996 21:34:33 -- common/autotest_common.sh@819 -- # '[' -z 73449 ']' 00:21:12.996 21:34:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:12.996 21:34:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:12.996 21:34:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:12.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:12.996 21:34:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:12.996 21:34:33 -- common/autotest_common.sh@10 -- # set +x 00:21:12.996 [2024-07-11 21:34:33.782470] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:21:12.996 [2024-07-11 21:34:33.782759] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:12.996 [2024-07-11 21:34:33.922813] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:13.252 [2024-07-11 21:34:34.006068] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:13.252 [2024-07-11 21:34:34.006232] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:13.252 [2024-07-11 21:34:34.006247] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:13.252 [2024-07-11 21:34:34.006257] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
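For comparison with the clean run, the dirty variant reloads the grown lvstore through crash recovery instead of a clean detach. The restart traced just above boils down to the following sketch; the pids and the lvstore UUID are the ones from this run, and rpc.py and the aio file are the same as in the earlier sketch.

    # Dirty variant: kill the target with the lvstore still open, restart it,
    # and let blobstore recovery run when the AIO bdev is re-created.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    aio_file=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev

    kill -9 "$nvmfpid"                                         # pid 72954 in this run
    wait "$nvmfpid"
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &   # new pid 73449 here

    # Re-creating the AIO bdev makes lvol re-open the store; because the previous
    # shutdown was unclean, the blobstore performs recovery first (the
    # "Performing recovery on blobstore" notices that follow in the log).
    "$rpc" bdev_aio_create "$aio_file" aio_bdev 4096
    "$rpc" bdev_lvol_get_lvstores -u aeb9e372-8924-4eb9-8179-3e288154a247 \
        | jq -r '.[0].total_data_clusters'                     # still 99 after recovery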
00:21:13.252 [2024-07-11 21:34:34.006283] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:13.818 21:34:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:13.818 21:34:34 -- common/autotest_common.sh@852 -- # return 0 00:21:13.818 21:34:34 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:13.818 21:34:34 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:13.818 21:34:34 -- common/autotest_common.sh@10 -- # set +x 00:21:14.076 21:34:34 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:14.076 21:34:34 -- target/nvmf_lvs_grow.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:21:14.377 [2024-07-11 21:34:35.031390] blobstore.c:4642:bs_recover: *NOTICE*: Performing recovery on blobstore 00:21:14.377 [2024-07-11 21:34:35.031874] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:21:14.377 [2024-07-11 21:34:35.032615] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:21:14.377 21:34:35 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:21:14.377 21:34:35 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev 39c74011-47b4-4771-b008-ebbdc1a45a92 00:21:14.377 21:34:35 -- common/autotest_common.sh@887 -- # local bdev_name=39c74011-47b4-4771-b008-ebbdc1a45a92 00:21:14.377 21:34:35 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:21:14.377 21:34:35 -- common/autotest_common.sh@889 -- # local i 00:21:14.377 21:34:35 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:21:14.377 21:34:35 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:21:14.377 21:34:35 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:21:14.673 21:34:35 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 39c74011-47b4-4771-b008-ebbdc1a45a92 -t 2000 00:21:14.673 [ 00:21:14.673 { 00:21:14.673 "name": "39c74011-47b4-4771-b008-ebbdc1a45a92", 00:21:14.673 "aliases": [ 00:21:14.673 "lvs/lvol" 00:21:14.673 ], 00:21:14.673 "product_name": "Logical Volume", 00:21:14.673 "block_size": 4096, 00:21:14.673 "num_blocks": 38912, 00:21:14.673 "uuid": "39c74011-47b4-4771-b008-ebbdc1a45a92", 00:21:14.673 "assigned_rate_limits": { 00:21:14.673 "rw_ios_per_sec": 0, 00:21:14.673 "rw_mbytes_per_sec": 0, 00:21:14.673 "r_mbytes_per_sec": 0, 00:21:14.673 "w_mbytes_per_sec": 0 00:21:14.673 }, 00:21:14.673 "claimed": false, 00:21:14.673 "zoned": false, 00:21:14.673 "supported_io_types": { 00:21:14.673 "read": true, 00:21:14.673 "write": true, 00:21:14.673 "unmap": true, 00:21:14.673 "write_zeroes": true, 00:21:14.673 "flush": false, 00:21:14.673 "reset": true, 00:21:14.673 "compare": false, 00:21:14.673 "compare_and_write": false, 00:21:14.673 "abort": false, 00:21:14.673 "nvme_admin": false, 00:21:14.673 "nvme_io": false 00:21:14.673 }, 00:21:14.673 "driver_specific": { 00:21:14.673 "lvol": { 00:21:14.673 "lvol_store_uuid": "aeb9e372-8924-4eb9-8179-3e288154a247", 00:21:14.673 "base_bdev": "aio_bdev", 00:21:14.673 "thin_provision": false, 00:21:14.673 "snapshot": false, 00:21:14.673 "clone": false, 00:21:14.673 "esnap_clone": false 00:21:14.673 } 00:21:14.673 } 00:21:14.673 } 00:21:14.673 ] 00:21:14.673 21:34:35 -- common/autotest_common.sh@895 -- # return 0 00:21:14.673 21:34:35 -- target/nvmf_lvs_grow.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
aeb9e372-8924-4eb9-8179-3e288154a247 00:21:14.673 21:34:35 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:21:15.239 21:34:35 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:21:15.240 21:34:35 -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aeb9e372-8924-4eb9-8179-3e288154a247 00:21:15.240 21:34:35 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:21:15.240 21:34:36 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:21:15.240 21:34:36 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:21:15.497 [2024-07-11 21:34:36.320872] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:21:15.497 21:34:36 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aeb9e372-8924-4eb9-8179-3e288154a247 00:21:15.497 21:34:36 -- common/autotest_common.sh@640 -- # local es=0 00:21:15.497 21:34:36 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aeb9e372-8924-4eb9-8179-3e288154a247 00:21:15.497 21:34:36 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:15.497 21:34:36 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:15.497 21:34:36 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:15.497 21:34:36 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:15.497 21:34:36 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:15.497 21:34:36 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:15.497 21:34:36 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:15.497 21:34:36 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:21:15.497 21:34:36 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aeb9e372-8924-4eb9-8179-3e288154a247 00:21:15.754 request: 00:21:15.754 { 00:21:15.754 "uuid": "aeb9e372-8924-4eb9-8179-3e288154a247", 00:21:15.754 "method": "bdev_lvol_get_lvstores", 00:21:15.754 "req_id": 1 00:21:15.754 } 00:21:15.754 Got JSON-RPC error response 00:21:15.754 response: 00:21:15.754 { 00:21:15.754 "code": -19, 00:21:15.754 "message": "No such device" 00:21:15.754 } 00:21:15.754 21:34:36 -- common/autotest_common.sh@643 -- # es=1 00:21:15.754 21:34:36 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:21:15.754 21:34:36 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:21:15.754 21:34:36 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:21:15.754 21:34:36 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:21:16.012 aio_bdev 00:21:16.012 21:34:36 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 39c74011-47b4-4771-b008-ebbdc1a45a92 00:21:16.012 21:34:36 -- common/autotest_common.sh@887 -- # local bdev_name=39c74011-47b4-4771-b008-ebbdc1a45a92 00:21:16.012 21:34:36 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:21:16.012 21:34:36 -- common/autotest_common.sh@889 -- # local i 00:21:16.012 21:34:36 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:21:16.012 21:34:36 -- 
common/autotest_common.sh@890 -- # bdev_timeout=2000 00:21:16.012 21:34:36 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:21:16.270 21:34:37 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 39c74011-47b4-4771-b008-ebbdc1a45a92 -t 2000 00:21:16.527 [ 00:21:16.527 { 00:21:16.527 "name": "39c74011-47b4-4771-b008-ebbdc1a45a92", 00:21:16.527 "aliases": [ 00:21:16.527 "lvs/lvol" 00:21:16.527 ], 00:21:16.527 "product_name": "Logical Volume", 00:21:16.527 "block_size": 4096, 00:21:16.527 "num_blocks": 38912, 00:21:16.527 "uuid": "39c74011-47b4-4771-b008-ebbdc1a45a92", 00:21:16.527 "assigned_rate_limits": { 00:21:16.527 "rw_ios_per_sec": 0, 00:21:16.527 "rw_mbytes_per_sec": 0, 00:21:16.527 "r_mbytes_per_sec": 0, 00:21:16.527 "w_mbytes_per_sec": 0 00:21:16.527 }, 00:21:16.527 "claimed": false, 00:21:16.527 "zoned": false, 00:21:16.527 "supported_io_types": { 00:21:16.527 "read": true, 00:21:16.527 "write": true, 00:21:16.527 "unmap": true, 00:21:16.528 "write_zeroes": true, 00:21:16.528 "flush": false, 00:21:16.528 "reset": true, 00:21:16.528 "compare": false, 00:21:16.528 "compare_and_write": false, 00:21:16.528 "abort": false, 00:21:16.528 "nvme_admin": false, 00:21:16.528 "nvme_io": false 00:21:16.528 }, 00:21:16.528 "driver_specific": { 00:21:16.528 "lvol": { 00:21:16.528 "lvol_store_uuid": "aeb9e372-8924-4eb9-8179-3e288154a247", 00:21:16.528 "base_bdev": "aio_bdev", 00:21:16.528 "thin_provision": false, 00:21:16.528 "snapshot": false, 00:21:16.528 "clone": false, 00:21:16.528 "esnap_clone": false 00:21:16.528 } 00:21:16.528 } 00:21:16.528 } 00:21:16.528 ] 00:21:16.528 21:34:37 -- common/autotest_common.sh@895 -- # return 0 00:21:16.528 21:34:37 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aeb9e372-8924-4eb9-8179-3e288154a247 00:21:16.528 21:34:37 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:21:16.786 21:34:37 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:21:16.786 21:34:37 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aeb9e372-8924-4eb9-8179-3e288154a247 00:21:16.786 21:34:37 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:21:17.045 21:34:37 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:21:17.045 21:34:37 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 39c74011-47b4-4771-b008-ebbdc1a45a92 00:21:17.304 21:34:38 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u aeb9e372-8924-4eb9-8179-3e288154a247 00:21:17.562 21:34:38 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:21:17.820 21:34:38 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:21:18.078 ************************************ 00:21:18.078 END TEST lvs_grow_dirty 00:21:18.078 ************************************ 00:21:18.078 00:21:18.078 real 0m19.913s 00:21:18.078 user 0m41.754s 00:21:18.078 sys 0m8.066s 00:21:18.078 21:34:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:18.078 21:34:38 -- common/autotest_common.sh@10 -- # set +x 00:21:18.078 21:34:38 -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:21:18.078 21:34:38 -- common/autotest_common.sh@796 -- # type=--id 00:21:18.078 21:34:38 -- 
common/autotest_common.sh@797 -- # id=0 00:21:18.078 21:34:38 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:21:18.079 21:34:38 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:18.079 21:34:38 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:21:18.079 21:34:38 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:21:18.079 21:34:38 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:21:18.079 21:34:38 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:18.079 nvmf_trace.0 00:21:18.079 21:34:38 -- common/autotest_common.sh@811 -- # return 0 00:21:18.079 21:34:38 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:21:18.079 21:34:38 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:18.079 21:34:38 -- nvmf/common.sh@116 -- # sync 00:21:18.337 21:34:39 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:18.337 21:34:39 -- nvmf/common.sh@119 -- # set +e 00:21:18.337 21:34:39 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:18.337 21:34:39 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:18.337 rmmod nvme_tcp 00:21:18.337 rmmod nvme_fabrics 00:21:18.337 rmmod nvme_keyring 00:21:18.337 21:34:39 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:18.337 21:34:39 -- nvmf/common.sh@123 -- # set -e 00:21:18.337 21:34:39 -- nvmf/common.sh@124 -- # return 0 00:21:18.337 21:34:39 -- nvmf/common.sh@477 -- # '[' -n 73449 ']' 00:21:18.337 21:34:39 -- nvmf/common.sh@478 -- # killprocess 73449 00:21:18.337 21:34:39 -- common/autotest_common.sh@926 -- # '[' -z 73449 ']' 00:21:18.337 21:34:39 -- common/autotest_common.sh@930 -- # kill -0 73449 00:21:18.337 21:34:39 -- common/autotest_common.sh@931 -- # uname 00:21:18.337 21:34:39 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:18.337 21:34:39 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 73449 00:21:18.337 21:34:39 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:21:18.337 21:34:39 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:21:18.337 killing process with pid 73449 00:21:18.337 21:34:39 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 73449' 00:21:18.337 21:34:39 -- common/autotest_common.sh@945 -- # kill 73449 00:21:18.337 21:34:39 -- common/autotest_common.sh@950 -- # wait 73449 00:21:18.654 21:34:39 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:18.654 21:34:39 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:18.654 21:34:39 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:18.654 21:34:39 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:18.654 21:34:39 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:18.654 21:34:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:18.654 21:34:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:18.654 21:34:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:18.654 21:34:39 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:21:18.654 ************************************ 00:21:18.654 END TEST nvmf_lvs_grow 00:21:18.654 ************************************ 00:21:18.654 00:21:18.654 real 0m40.378s 00:21:18.654 user 1m4.890s 00:21:18.654 sys 0m11.411s 00:21:18.654 21:34:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:18.654 21:34:39 -- common/autotest_common.sh@10 -- # set +x 00:21:18.654 21:34:39 -- nvmf/nvmf.sh@49 -- # run_test 
nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:21:18.654 21:34:39 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:21:18.654 21:34:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:18.655 21:34:39 -- common/autotest_common.sh@10 -- # set +x 00:21:18.655 ************************************ 00:21:18.655 START TEST nvmf_bdev_io_wait 00:21:18.655 ************************************ 00:21:18.655 21:34:39 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:21:18.928 * Looking for test storage... 00:21:18.928 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:21:18.928 21:34:39 -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:18.928 21:34:39 -- nvmf/common.sh@7 -- # uname -s 00:21:18.928 21:34:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:18.928 21:34:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:18.928 21:34:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:18.928 21:34:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:18.928 21:34:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:18.928 21:34:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:18.928 21:34:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:18.928 21:34:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:18.928 21:34:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:18.928 21:34:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:18.928 21:34:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:65f0dc09-2f81-4c7b-a413-2a2a000e2750 00:21:18.928 21:34:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=65f0dc09-2f81-4c7b-a413-2a2a000e2750 00:21:18.928 21:34:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:18.928 21:34:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:18.928 21:34:39 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:18.928 21:34:39 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:18.928 21:34:39 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:18.928 21:34:39 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:18.928 21:34:39 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:18.928 21:34:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:18.928 21:34:39 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:18.928 21:34:39 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:18.928 21:34:39 -- paths/export.sh@5 -- # export PATH 00:21:18.928 21:34:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:18.928 21:34:39 -- nvmf/common.sh@46 -- # : 0 00:21:18.928 21:34:39 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:18.928 21:34:39 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:18.928 21:34:39 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:18.928 21:34:39 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:18.928 21:34:39 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:18.928 21:34:39 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:18.928 21:34:39 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:18.929 21:34:39 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:18.929 21:34:39 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:18.929 21:34:39 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:18.929 21:34:39 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:21:18.929 21:34:39 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:18.929 21:34:39 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:18.929 21:34:39 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:18.929 21:34:39 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:18.929 21:34:39 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:18.929 21:34:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:18.929 21:34:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:18.929 21:34:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:18.929 21:34:39 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:21:18.929 21:34:39 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:21:18.929 21:34:39 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:21:18.929 21:34:39 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:21:18.929 21:34:39 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 
00:21:18.929 21:34:39 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:21:18.929 21:34:39 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:18.929 21:34:39 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:18.929 21:34:39 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:18.929 21:34:39 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:21:18.929 21:34:39 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:18.929 21:34:39 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:18.929 21:34:39 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:18.929 21:34:39 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:18.929 21:34:39 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:18.929 21:34:39 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:18.929 21:34:39 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:18.929 21:34:39 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:18.929 21:34:39 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:21:18.929 21:34:39 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:21:18.929 Cannot find device "nvmf_tgt_br" 00:21:18.929 21:34:39 -- nvmf/common.sh@154 -- # true 00:21:18.929 21:34:39 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:21:18.929 Cannot find device "nvmf_tgt_br2" 00:21:18.929 21:34:39 -- nvmf/common.sh@155 -- # true 00:21:18.929 21:34:39 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:21:18.929 21:34:39 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:21:18.929 Cannot find device "nvmf_tgt_br" 00:21:18.929 21:34:39 -- nvmf/common.sh@157 -- # true 00:21:18.929 21:34:39 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:21:18.929 Cannot find device "nvmf_tgt_br2" 00:21:18.929 21:34:39 -- nvmf/common.sh@158 -- # true 00:21:18.929 21:34:39 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:21:18.929 21:34:39 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:21:18.929 21:34:39 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:18.929 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:18.929 21:34:39 -- nvmf/common.sh@161 -- # true 00:21:18.929 21:34:39 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:18.929 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:18.929 21:34:39 -- nvmf/common.sh@162 -- # true 00:21:18.929 21:34:39 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:21:18.929 21:34:39 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:18.929 21:34:39 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:18.929 21:34:39 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:18.929 21:34:39 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:18.929 21:34:39 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:18.929 21:34:39 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:18.929 21:34:39 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:18.929 21:34:39 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:19.187 
21:34:39 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:21:19.187 21:34:39 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:21:19.187 21:34:39 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:21:19.187 21:34:39 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:21:19.187 21:34:39 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:19.187 21:34:39 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:19.188 21:34:39 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:19.188 21:34:39 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:21:19.188 21:34:39 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:21:19.188 21:34:39 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:21:19.188 21:34:39 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:19.188 21:34:39 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:19.188 21:34:39 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:19.188 21:34:39 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:19.188 21:34:39 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:21:19.188 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:19.188 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.084 ms 00:21:19.188 00:21:19.188 --- 10.0.0.2 ping statistics --- 00:21:19.188 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:19.188 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:21:19.188 21:34:39 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:21:19.188 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:19.188 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.077 ms 00:21:19.188 00:21:19.188 --- 10.0.0.3 ping statistics --- 00:21:19.188 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:19.188 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:21:19.188 21:34:39 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:19.188 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:19.188 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:21:19.188 00:21:19.188 --- 10.0.0.1 ping statistics --- 00:21:19.188 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:19.188 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:21:19.188 21:34:40 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:19.188 21:34:40 -- nvmf/common.sh@421 -- # return 0 00:21:19.188 21:34:40 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:19.188 21:34:40 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:19.188 21:34:40 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:19.188 21:34:40 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:19.188 21:34:40 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:19.188 21:34:40 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:19.188 21:34:40 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:19.188 21:34:40 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:21:19.188 21:34:40 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:19.188 21:34:40 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:19.188 21:34:40 -- common/autotest_common.sh@10 -- # set +x 00:21:19.188 21:34:40 -- nvmf/common.sh@469 -- # nvmfpid=73766 00:21:19.188 21:34:40 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:21:19.188 21:34:40 -- nvmf/common.sh@470 -- # waitforlisten 73766 00:21:19.188 21:34:40 -- common/autotest_common.sh@819 -- # '[' -z 73766 ']' 00:21:19.188 21:34:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:19.188 21:34:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:19.188 21:34:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:19.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:19.188 21:34:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:19.188 21:34:40 -- common/autotest_common.sh@10 -- # set +x 00:21:19.188 [2024-07-11 21:34:40.080558] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:21:19.188 [2024-07-11 21:34:40.080635] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:19.446 [2024-07-11 21:34:40.220687] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:19.446 [2024-07-11 21:34:40.315718] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:19.446 [2024-07-11 21:34:40.315884] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:19.446 [2024-07-11 21:34:40.315900] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:19.446 [2024-07-11 21:34:40.315912] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
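For reference, the virtual topology that nvmf_veth_init assembles in the trace above can be reproduced outside the harness; the sketch below condenses the same ip/iptables sequence (interface names and the 10.0.0.0/24 addresses are taken from the log, the second target interface and the initial flush/delete steps are omitted, and root plus iproute2 are assumed):

    # minimal sketch of the test network: initiator veth on the host, target veth in a
    # namespace, both plugged into one bridge so 10.0.0.1 (initiator) can reach
    # 10.0.0.2 (target) on TCP port 4420
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three pings that follow in the trace are the harness confirming exactly this reachability before the target is started.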
00:21:19.446 [2024-07-11 21:34:40.316088] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:19.446 [2024-07-11 21:34:40.316172] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:19.446 [2024-07-11 21:34:40.316318] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:19.446 [2024-07-11 21:34:40.316323] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:20.381 21:34:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:20.381 21:34:41 -- common/autotest_common.sh@852 -- # return 0 00:21:20.381 21:34:41 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:20.381 21:34:41 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:20.381 21:34:41 -- common/autotest_common.sh@10 -- # set +x 00:21:20.381 21:34:41 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:20.381 21:34:41 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:21:20.381 21:34:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:20.381 21:34:41 -- common/autotest_common.sh@10 -- # set +x 00:21:20.381 21:34:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:20.381 21:34:41 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:21:20.381 21:34:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:20.381 21:34:41 -- common/autotest_common.sh@10 -- # set +x 00:21:20.381 21:34:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:20.381 21:34:41 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:20.381 21:34:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:20.381 21:34:41 -- common/autotest_common.sh@10 -- # set +x 00:21:20.381 [2024-07-11 21:34:41.201981] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:20.381 21:34:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:20.381 21:34:41 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:20.381 21:34:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:20.381 21:34:41 -- common/autotest_common.sh@10 -- # set +x 00:21:20.381 Malloc0 00:21:20.381 21:34:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:20.381 21:34:41 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:20.381 21:34:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:20.381 21:34:41 -- common/autotest_common.sh@10 -- # set +x 00:21:20.381 21:34:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:20.381 21:34:41 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:20.381 21:34:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:20.381 21:34:41 -- common/autotest_common.sh@10 -- # set +x 00:21:20.381 21:34:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:20.381 21:34:41 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:20.381 21:34:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:20.381 21:34:41 -- common/autotest_common.sh@10 -- # set +x 00:21:20.381 [2024-07-11 21:34:41.270267] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:20.381 21:34:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:20.381 21:34:41 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=73801 00:21:20.381 21:34:41 
-- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:21:20.381 21:34:41 -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:21:20.381 21:34:41 -- target/bdev_io_wait.sh@30 -- # READ_PID=73803 00:21:20.381 21:34:41 -- nvmf/common.sh@520 -- # config=() 00:21:20.381 21:34:41 -- nvmf/common.sh@520 -- # local subsystem config 00:21:20.381 21:34:41 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:20.381 21:34:41 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:20.381 { 00:21:20.381 "params": { 00:21:20.381 "name": "Nvme$subsystem", 00:21:20.381 "trtype": "$TEST_TRANSPORT", 00:21:20.381 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:20.381 "adrfam": "ipv4", 00:21:20.381 "trsvcid": "$NVMF_PORT", 00:21:20.381 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:20.381 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:20.381 "hdgst": ${hdgst:-false}, 00:21:20.381 "ddgst": ${ddgst:-false} 00:21:20.381 }, 00:21:20.381 "method": "bdev_nvme_attach_controller" 00:21:20.381 } 00:21:20.381 EOF 00:21:20.381 )") 00:21:20.381 21:34:41 -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:21:20.381 21:34:41 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:21:20.381 21:34:41 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=73805 00:21:20.381 21:34:41 -- nvmf/common.sh@520 -- # config=() 00:21:20.381 21:34:41 -- nvmf/common.sh@520 -- # local subsystem config 00:21:20.381 21:34:41 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:20.381 21:34:41 -- nvmf/common.sh@542 -- # cat 00:21:20.381 21:34:41 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:20.381 { 00:21:20.381 "params": { 00:21:20.381 "name": "Nvme$subsystem", 00:21:20.381 "trtype": "$TEST_TRANSPORT", 00:21:20.381 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:20.381 "adrfam": "ipv4", 00:21:20.381 "trsvcid": "$NVMF_PORT", 00:21:20.381 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:20.381 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:20.381 "hdgst": ${hdgst:-false}, 00:21:20.381 "ddgst": ${ddgst:-false} 00:21:20.381 }, 00:21:20.381 "method": "bdev_nvme_attach_controller" 00:21:20.381 } 00:21:20.381 EOF 00:21:20.381 )") 00:21:20.381 21:34:41 -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:21:20.381 21:34:41 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=73808 00:21:20.381 21:34:41 -- target/bdev_io_wait.sh@35 -- # sync 00:21:20.381 21:34:41 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:21:20.381 21:34:41 -- nvmf/common.sh@520 -- # config=() 00:21:20.381 21:34:41 -- nvmf/common.sh@520 -- # local subsystem config 00:21:20.381 21:34:41 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:20.381 21:34:41 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:20.381 { 00:21:20.381 "params": { 00:21:20.381 "name": "Nvme$subsystem", 00:21:20.381 "trtype": "$TEST_TRANSPORT", 00:21:20.381 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:20.381 "adrfam": "ipv4", 00:21:20.381 "trsvcid": "$NVMF_PORT", 00:21:20.381 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:20.381 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:20.381 "hdgst": ${hdgst:-false}, 00:21:20.381 "ddgst": ${ddgst:-false} 00:21:20.381 }, 00:21:20.381 "method": "bdev_nvme_attach_controller" 00:21:20.381 } 00:21:20.381 
EOF 00:21:20.381 )") 00:21:20.381 21:34:41 -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:21:20.381 21:34:41 -- nvmf/common.sh@542 -- # cat 00:21:20.381 21:34:41 -- nvmf/common.sh@542 -- # cat 00:21:20.381 21:34:41 -- nvmf/common.sh@544 -- # jq . 00:21:20.381 21:34:41 -- nvmf/common.sh@544 -- # jq . 00:21:20.381 21:34:41 -- nvmf/common.sh@545 -- # IFS=, 00:21:20.381 21:34:41 -- nvmf/common.sh@545 -- # IFS=, 00:21:20.381 21:34:41 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:21:20.381 "params": { 00:21:20.381 "name": "Nvme1", 00:21:20.381 "trtype": "tcp", 00:21:20.381 "traddr": "10.0.0.2", 00:21:20.381 "adrfam": "ipv4", 00:21:20.381 "trsvcid": "4420", 00:21:20.381 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:20.381 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:20.381 "hdgst": false, 00:21:20.381 "ddgst": false 00:21:20.381 }, 00:21:20.381 "method": "bdev_nvme_attach_controller" 00:21:20.381 }' 00:21:20.381 21:34:41 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:21:20.381 "params": { 00:21:20.381 "name": "Nvme1", 00:21:20.381 "trtype": "tcp", 00:21:20.381 "traddr": "10.0.0.2", 00:21:20.381 "adrfam": "ipv4", 00:21:20.381 "trsvcid": "4420", 00:21:20.381 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:20.381 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:20.381 "hdgst": false, 00:21:20.381 "ddgst": false 00:21:20.381 }, 00:21:20.381 "method": "bdev_nvme_attach_controller" 00:21:20.381 }' 00:21:20.381 21:34:41 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:21:20.381 21:34:41 -- nvmf/common.sh@520 -- # config=() 00:21:20.381 21:34:41 -- nvmf/common.sh@520 -- # local subsystem config 00:21:20.381 21:34:41 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:20.381 21:34:41 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:20.381 { 00:21:20.381 "params": { 00:21:20.381 "name": "Nvme$subsystem", 00:21:20.381 "trtype": "$TEST_TRANSPORT", 00:21:20.381 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:20.381 "adrfam": "ipv4", 00:21:20.381 "trsvcid": "$NVMF_PORT", 00:21:20.381 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:20.381 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:20.381 "hdgst": ${hdgst:-false}, 00:21:20.381 "ddgst": ${ddgst:-false} 00:21:20.381 }, 00:21:20.381 "method": "bdev_nvme_attach_controller" 00:21:20.381 } 00:21:20.381 EOF 00:21:20.381 )") 00:21:20.381 21:34:41 -- nvmf/common.sh@544 -- # jq . 00:21:20.381 21:34:41 -- nvmf/common.sh@542 -- # cat 00:21:20.381 21:34:41 -- nvmf/common.sh@545 -- # IFS=, 00:21:20.381 21:34:41 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:21:20.381 "params": { 00:21:20.381 "name": "Nvme1", 00:21:20.381 "trtype": "tcp", 00:21:20.381 "traddr": "10.0.0.2", 00:21:20.381 "adrfam": "ipv4", 00:21:20.381 "trsvcid": "4420", 00:21:20.382 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:20.382 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:20.382 "hdgst": false, 00:21:20.382 "ddgst": false 00:21:20.382 }, 00:21:20.382 "method": "bdev_nvme_attach_controller" 00:21:20.382 }' 00:21:20.382 21:34:41 -- nvmf/common.sh@544 -- # jq . 
00:21:20.382 21:34:41 -- nvmf/common.sh@545 -- # IFS=, 00:21:20.382 21:34:41 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:21:20.382 "params": { 00:21:20.382 "name": "Nvme1", 00:21:20.382 "trtype": "tcp", 00:21:20.382 "traddr": "10.0.0.2", 00:21:20.382 "adrfam": "ipv4", 00:21:20.382 "trsvcid": "4420", 00:21:20.382 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:20.382 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:20.382 "hdgst": false, 00:21:20.382 "ddgst": false 00:21:20.382 }, 00:21:20.382 "method": "bdev_nvme_attach_controller" 00:21:20.382 }' 00:21:20.382 [2024-07-11 21:34:41.325805] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:21:20.382 [2024-07-11 21:34:41.325888] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:21:20.640 [2024-07-11 21:34:41.334071] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:21:20.640 [2024-07-11 21:34:41.334176] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:21:20.640 21:34:41 -- target/bdev_io_wait.sh@37 -- # wait 73801 00:21:20.640 [2024-07-11 21:34:41.360248] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:21:20.640 [2024-07-11 21:34:41.360556] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:21:20.640 [2024-07-11 21:34:41.367617] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:21:20.640 [2024-07-11 21:34:41.367691] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:21:20.640 [2024-07-11 21:34:41.531857] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:20.899 [2024-07-11 21:34:41.601398] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:21:20.899 [2024-07-11 21:34:41.609708] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:20.899 [2024-07-11 21:34:41.683121] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:20.899 [2024-07-11 21:34:41.685150] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:21:20.899 [2024-07-11 21:34:41.763291] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:21:20.899 [2024-07-11 21:34:41.769446] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:20.899 Running I/O for 1 seconds... 00:21:20.899 Running I/O for 1 seconds... 00:21:20.899 [2024-07-11 21:34:41.841078] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:21:21.157 Running I/O for 1 seconds... 00:21:21.157 Running I/O for 1 seconds... 
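For reference, the target side of this test is configured purely over JSON-RPC, as traced above; issued by hand, the same sequence is roughly the sketch below (rpc_cmd in the harness is a thin wrapper around scripts/rpc.py; socket defaults, sizes and the NQN are taken from the log):

    # minimal sketch: build the Malloc-backed TCP subsystem that the four bdevperf jobs
    # (write/read/flush/unmap) then attach to at 10.0.0.2:4420
    ./scripts/rpc.py bdev_set_options -p 5 -c 1        # small bdev_io pool/cache; this test exercises bdev_io_wait
    ./scripts/rpc.py framework_start_init              # target was launched with --wait-for-rpc
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Each bdevperf instance then receives the connection parameters shown in the printf output above as a JSON config fed through /dev/fd/63.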
00:21:22.092 00:21:22.092 Latency(us) 00:21:22.092 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:22.092 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:21:22.092 Nvme1n1 : 1.00 162729.51 635.66 0.00 0.00 783.79 331.40 1846.92 00:21:22.092 =================================================================================================================== 00:21:22.092 Total : 162729.51 635.66 0.00 0.00 783.79 331.40 1846.92 00:21:22.092 00:21:22.092 Latency(us) 00:21:22.092 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:22.092 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:21:22.092 Nvme1n1 : 1.01 10421.83 40.71 0.00 0.00 12228.66 7208.96 20852.36 00:21:22.092 =================================================================================================================== 00:21:22.092 Total : 10421.83 40.71 0.00 0.00 12228.66 7208.96 20852.36 00:21:22.092 00:21:22.092 Latency(us) 00:21:22.092 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:22.092 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:21:22.092 Nvme1n1 : 1.01 7910.17 30.90 0.00 0.00 16098.43 9592.09 26691.03 00:21:22.092 =================================================================================================================== 00:21:22.092 Total : 7910.17 30.90 0.00 0.00 16098.43 9592.09 26691.03 00:21:22.092 00:21:22.092 Latency(us) 00:21:22.092 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:22.092 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:21:22.092 Nvme1n1 : 1.01 9423.06 36.81 0.00 0.00 13529.18 6940.86 26810.18 00:21:22.092 =================================================================================================================== 00:21:22.092 Total : 9423.06 36.81 0.00 0.00 13529.18 6940.86 26810.18 00:21:22.092 21:34:43 -- target/bdev_io_wait.sh@38 -- # wait 73803 00:21:22.350 21:34:43 -- target/bdev_io_wait.sh@39 -- # wait 73805 00:21:22.350 21:34:43 -- target/bdev_io_wait.sh@40 -- # wait 73808 00:21:22.350 21:34:43 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:22.350 21:34:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:22.350 21:34:43 -- common/autotest_common.sh@10 -- # set +x 00:21:22.350 21:34:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:22.350 21:34:43 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:21:22.350 21:34:43 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:21:22.350 21:34:43 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:22.350 21:34:43 -- nvmf/common.sh@116 -- # sync 00:21:22.350 21:34:43 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:22.350 21:34:43 -- nvmf/common.sh@119 -- # set +e 00:21:22.350 21:34:43 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:22.350 21:34:43 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:22.350 rmmod nvme_tcp 00:21:22.350 rmmod nvme_fabrics 00:21:22.350 rmmod nvme_keyring 00:21:22.350 21:34:43 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:22.608 21:34:43 -- nvmf/common.sh@123 -- # set -e 00:21:22.608 21:34:43 -- nvmf/common.sh@124 -- # return 0 00:21:22.608 21:34:43 -- nvmf/common.sh@477 -- # '[' -n 73766 ']' 00:21:22.608 21:34:43 -- nvmf/common.sh@478 -- # killprocess 73766 00:21:22.608 21:34:43 -- common/autotest_common.sh@926 -- # '[' -z 73766 ']' 00:21:22.608 21:34:43 -- common/autotest_common.sh@930 
-- # kill -0 73766 00:21:22.608 21:34:43 -- common/autotest_common.sh@931 -- # uname 00:21:22.608 21:34:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:22.608 21:34:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 73766 00:21:22.608 21:34:43 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:21:22.608 killing process with pid 73766 00:21:22.608 21:34:43 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:21:22.608 21:34:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 73766' 00:21:22.608 21:34:43 -- common/autotest_common.sh@945 -- # kill 73766 00:21:22.608 21:34:43 -- common/autotest_common.sh@950 -- # wait 73766 00:21:22.608 21:34:43 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:22.608 21:34:43 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:22.608 21:34:43 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:22.608 21:34:43 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:22.608 21:34:43 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:22.608 21:34:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:22.608 21:34:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:22.608 21:34:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:22.866 21:34:43 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:21:22.866 00:21:22.866 real 0m4.053s 00:21:22.866 user 0m17.354s 00:21:22.866 sys 0m2.290s 00:21:22.866 21:34:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:22.866 21:34:43 -- common/autotest_common.sh@10 -- # set +x 00:21:22.866 ************************************ 00:21:22.866 END TEST nvmf_bdev_io_wait 00:21:22.866 ************************************ 00:21:22.867 21:34:43 -- nvmf/nvmf.sh@50 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:21:22.867 21:34:43 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:21:22.867 21:34:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:22.867 21:34:43 -- common/autotest_common.sh@10 -- # set +x 00:21:22.867 ************************************ 00:21:22.867 START TEST nvmf_queue_depth 00:21:22.867 ************************************ 00:21:22.867 21:34:43 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:21:22.867 * Looking for test storage... 
00:21:22.867 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:21:22.867 21:34:43 -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:22.867 21:34:43 -- nvmf/common.sh@7 -- # uname -s 00:21:22.867 21:34:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:22.867 21:34:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:22.867 21:34:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:22.867 21:34:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:22.867 21:34:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:22.867 21:34:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:22.867 21:34:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:22.867 21:34:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:22.867 21:34:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:22.867 21:34:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:22.867 21:34:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:65f0dc09-2f81-4c7b-a413-2a2a000e2750 00:21:22.867 21:34:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=65f0dc09-2f81-4c7b-a413-2a2a000e2750 00:21:22.867 21:34:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:22.867 21:34:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:22.867 21:34:43 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:22.867 21:34:43 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:22.867 21:34:43 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:22.867 21:34:43 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:22.867 21:34:43 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:22.867 21:34:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.867 21:34:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.867 21:34:43 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.867 21:34:43 -- 
paths/export.sh@5 -- # export PATH 00:21:22.867 21:34:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.867 21:34:43 -- nvmf/common.sh@46 -- # : 0 00:21:22.867 21:34:43 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:22.867 21:34:43 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:22.867 21:34:43 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:22.867 21:34:43 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:22.867 21:34:43 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:22.867 21:34:43 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:22.867 21:34:43 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:22.867 21:34:43 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:22.867 21:34:43 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:21:22.867 21:34:43 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:21:22.867 21:34:43 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:22.867 21:34:43 -- target/queue_depth.sh@19 -- # nvmftestinit 00:21:22.867 21:34:43 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:22.867 21:34:43 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:22.867 21:34:43 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:22.867 21:34:43 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:22.867 21:34:43 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:22.867 21:34:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:22.867 21:34:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:22.867 21:34:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:22.867 21:34:43 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:21:22.867 21:34:43 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:21:22.867 21:34:43 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:21:22.867 21:34:43 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:21:22.867 21:34:43 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:21:22.867 21:34:43 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:21:22.867 21:34:43 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:22.867 21:34:43 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:22.867 21:34:43 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:22.867 21:34:43 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:21:22.867 21:34:43 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:22.867 21:34:43 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:22.867 21:34:43 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:22.867 21:34:43 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:22.867 21:34:43 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:22.867 21:34:43 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:22.867 21:34:43 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:22.867 21:34:43 -- nvmf/common.sh@151 -- # 
NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:22.867 21:34:43 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:21:22.867 21:34:43 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:21:22.867 Cannot find device "nvmf_tgt_br" 00:21:22.867 21:34:43 -- nvmf/common.sh@154 -- # true 00:21:22.867 21:34:43 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:21:22.867 Cannot find device "nvmf_tgt_br2" 00:21:22.867 21:34:43 -- nvmf/common.sh@155 -- # true 00:21:22.867 21:34:43 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:21:22.867 21:34:43 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:21:22.867 Cannot find device "nvmf_tgt_br" 00:21:22.867 21:34:43 -- nvmf/common.sh@157 -- # true 00:21:22.867 21:34:43 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:21:22.867 Cannot find device "nvmf_tgt_br2" 00:21:22.867 21:34:43 -- nvmf/common.sh@158 -- # true 00:21:22.867 21:34:43 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:21:23.127 21:34:43 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:21:23.127 21:34:43 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:23.127 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:23.127 21:34:43 -- nvmf/common.sh@161 -- # true 00:21:23.127 21:34:43 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:23.127 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:23.127 21:34:43 -- nvmf/common.sh@162 -- # true 00:21:23.127 21:34:43 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:21:23.127 21:34:43 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:23.127 21:34:43 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:23.127 21:34:43 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:23.127 21:34:43 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:23.127 21:34:43 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:23.127 21:34:43 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:23.127 21:34:43 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:23.127 21:34:44 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:23.127 21:34:44 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:21:23.127 21:34:44 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:21:23.127 21:34:44 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:21:23.127 21:34:44 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:21:23.127 21:34:44 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:23.127 21:34:44 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:23.127 21:34:44 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:23.127 21:34:44 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:21:23.127 21:34:44 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:21:23.127 21:34:44 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:21:23.127 21:34:44 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:23.127 21:34:44 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:23.386 
21:34:44 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:23.386 21:34:44 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:23.386 21:34:44 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:21:23.386 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:23.386 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:21:23.386 00:21:23.386 --- 10.0.0.2 ping statistics --- 00:21:23.386 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:23.386 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:21:23.386 21:34:44 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:21:23.386 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:23.386 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:21:23.386 00:21:23.386 --- 10.0.0.3 ping statistics --- 00:21:23.386 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:23.386 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:21:23.386 21:34:44 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:23.386 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:23.386 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:21:23.386 00:21:23.386 --- 10.0.0.1 ping statistics --- 00:21:23.386 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:23.386 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:21:23.386 21:34:44 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:23.386 21:34:44 -- nvmf/common.sh@421 -- # return 0 00:21:23.386 21:34:44 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:23.386 21:34:44 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:23.386 21:34:44 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:23.386 21:34:44 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:23.386 21:34:44 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:23.386 21:34:44 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:23.386 21:34:44 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:23.386 21:34:44 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:21:23.386 21:34:44 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:23.386 21:34:44 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:23.386 21:34:44 -- common/autotest_common.sh@10 -- # set +x 00:21:23.386 21:34:44 -- nvmf/common.sh@469 -- # nvmfpid=74038 00:21:23.386 21:34:44 -- nvmf/common.sh@470 -- # waitforlisten 74038 00:21:23.386 21:34:44 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:23.386 21:34:44 -- common/autotest_common.sh@819 -- # '[' -z 74038 ']' 00:21:23.386 21:34:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:23.386 21:34:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:23.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:23.386 21:34:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:23.386 21:34:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:23.386 21:34:44 -- common/autotest_common.sh@10 -- # set +x 00:21:23.386 [2024-07-11 21:34:44.171779] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:21:23.386 [2024-07-11 21:34:44.171871] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:23.386 [2024-07-11 21:34:44.314583] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:23.645 [2024-07-11 21:34:44.412089] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:23.645 [2024-07-11 21:34:44.412263] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:23.645 [2024-07-11 21:34:44.412279] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:23.645 [2024-07-11 21:34:44.412289] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:23.645 [2024-07-11 21:34:44.412329] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:24.580 21:34:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:24.580 21:34:45 -- common/autotest_common.sh@852 -- # return 0 00:21:24.580 21:34:45 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:24.580 21:34:45 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:24.580 21:34:45 -- common/autotest_common.sh@10 -- # set +x 00:21:24.580 21:34:45 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:24.580 21:34:45 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:24.580 21:34:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:24.580 21:34:45 -- common/autotest_common.sh@10 -- # set +x 00:21:24.580 [2024-07-11 21:34:45.214455] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:24.580 21:34:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:24.580 21:34:45 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:24.580 21:34:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:24.580 21:34:45 -- common/autotest_common.sh@10 -- # set +x 00:21:24.580 Malloc0 00:21:24.580 21:34:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:24.580 21:34:45 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:24.580 21:34:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:24.580 21:34:45 -- common/autotest_common.sh@10 -- # set +x 00:21:24.580 21:34:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:24.580 21:34:45 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:24.580 21:34:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:24.580 21:34:45 -- common/autotest_common.sh@10 -- # set +x 00:21:24.580 21:34:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:24.580 21:34:45 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:24.580 21:34:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:24.580 21:34:45 -- common/autotest_common.sh@10 -- # set +x 00:21:24.580 [2024-07-11 21:34:45.281079] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:24.580 21:34:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:24.580 21:34:45 -- target/queue_depth.sh@30 -- # bdevperf_pid=74070 00:21:24.580 21:34:45 
-- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:21:24.580 21:34:45 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:24.580 21:34:45 -- target/queue_depth.sh@33 -- # waitforlisten 74070 /var/tmp/bdevperf.sock 00:21:24.580 21:34:45 -- common/autotest_common.sh@819 -- # '[' -z 74070 ']' 00:21:24.580 21:34:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:24.580 21:34:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:24.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:24.580 21:34:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:24.580 21:34:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:24.580 21:34:45 -- common/autotest_common.sh@10 -- # set +x 00:21:24.580 [2024-07-11 21:34:45.338688] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:21:24.580 [2024-07-11 21:34:45.338788] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74070 ] 00:21:24.580 [2024-07-11 21:34:45.484053] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:24.838 [2024-07-11 21:34:45.577131] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:25.772 21:34:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:25.772 21:34:46 -- common/autotest_common.sh@852 -- # return 0 00:21:25.772 21:34:46 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:25.772 21:34:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:25.772 21:34:46 -- common/autotest_common.sh@10 -- # set +x 00:21:25.772 NVMe0n1 00:21:25.772 21:34:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:25.773 21:34:46 -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:25.773 Running I/O for 10 seconds... 
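For reference, the measurement above is driven over bdevperf's own RPC socket rather than a static config file; condensed, the flow in queue_depth.sh is roughly the sketch below (binary paths, address, port and NQN as in the log):

    # minimal sketch: start bdevperf idle (-z, wait for RPC), hot-attach the TCP target,
    # then kick off the 10-second verify workload at queue depth 1024
    ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The NVMe0n1 results that follow are the output of that perform_tests call.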
00:21:35.853 00:21:35.853 Latency(us) 00:21:35.853 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:35.853 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:21:35.853 Verification LBA range: start 0x0 length 0x4000 00:21:35.853 NVMe0n1 : 10.07 13802.65 53.92 0.00 0.00 73882.70 15966.95 61961.31 00:21:35.853 =================================================================================================================== 00:21:35.853 Total : 13802.65 53.92 0.00 0.00 73882.70 15966.95 61961.31 00:21:35.853 0 00:21:35.853 21:34:56 -- target/queue_depth.sh@39 -- # killprocess 74070 00:21:35.853 21:34:56 -- common/autotest_common.sh@926 -- # '[' -z 74070 ']' 00:21:35.853 21:34:56 -- common/autotest_common.sh@930 -- # kill -0 74070 00:21:35.853 21:34:56 -- common/autotest_common.sh@931 -- # uname 00:21:35.853 21:34:56 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:35.853 21:34:56 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 74070 00:21:35.853 21:34:56 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:21:35.853 21:34:56 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:21:35.853 killing process with pid 74070 00:21:35.853 21:34:56 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 74070' 00:21:35.853 21:34:56 -- common/autotest_common.sh@945 -- # kill 74070 00:21:35.853 Received shutdown signal, test time was about 10.000000 seconds 00:21:35.853 00:21:35.853 Latency(us) 00:21:35.853 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:35.853 =================================================================================================================== 00:21:35.853 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:35.853 21:34:56 -- common/autotest_common.sh@950 -- # wait 74070 00:21:36.111 21:34:56 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:21:36.111 21:34:56 -- target/queue_depth.sh@43 -- # nvmftestfini 00:21:36.111 21:34:56 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:36.111 21:34:56 -- nvmf/common.sh@116 -- # sync 00:21:36.111 21:34:56 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:36.111 21:34:56 -- nvmf/common.sh@119 -- # set +e 00:21:36.111 21:34:56 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:36.111 21:34:56 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:36.111 rmmod nvme_tcp 00:21:36.111 rmmod nvme_fabrics 00:21:36.111 rmmod nvme_keyring 00:21:36.111 21:34:56 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:36.111 21:34:56 -- nvmf/common.sh@123 -- # set -e 00:21:36.111 21:34:56 -- nvmf/common.sh@124 -- # return 0 00:21:36.111 21:34:56 -- nvmf/common.sh@477 -- # '[' -n 74038 ']' 00:21:36.111 21:34:56 -- nvmf/common.sh@478 -- # killprocess 74038 00:21:36.111 21:34:56 -- common/autotest_common.sh@926 -- # '[' -z 74038 ']' 00:21:36.111 21:34:56 -- common/autotest_common.sh@930 -- # kill -0 74038 00:21:36.111 21:34:56 -- common/autotest_common.sh@931 -- # uname 00:21:36.111 21:34:56 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:36.111 21:34:56 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 74038 00:21:36.111 21:34:56 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:21:36.111 21:34:56 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:21:36.111 killing process with pid 74038 00:21:36.111 21:34:56 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 74038' 00:21:36.111 21:34:56 -- 
common/autotest_common.sh@945 -- # kill 74038 00:21:36.111 21:34:57 -- common/autotest_common.sh@950 -- # wait 74038 00:21:36.370 21:34:57 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:36.370 21:34:57 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:36.370 21:34:57 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:36.370 21:34:57 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:36.370 21:34:57 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:36.370 21:34:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:36.370 21:34:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:36.370 21:34:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:36.370 21:34:57 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:21:36.370 00:21:36.370 real 0m13.634s 00:21:36.370 user 0m23.689s 00:21:36.370 sys 0m2.065s 00:21:36.370 21:34:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:36.370 ************************************ 00:21:36.370 END TEST nvmf_queue_depth 00:21:36.370 ************************************ 00:21:36.370 21:34:57 -- common/autotest_common.sh@10 -- # set +x 00:21:36.370 21:34:57 -- nvmf/nvmf.sh@51 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:21:36.370 21:34:57 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:21:36.370 21:34:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:36.370 21:34:57 -- common/autotest_common.sh@10 -- # set +x 00:21:36.370 ************************************ 00:21:36.370 START TEST nvmf_multipath 00:21:36.370 ************************************ 00:21:36.370 21:34:57 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:21:36.630 * Looking for test storage... 
00:21:36.630 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:21:36.630 21:34:57 -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:36.630 21:34:57 -- nvmf/common.sh@7 -- # uname -s 00:21:36.630 21:34:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:36.630 21:34:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:36.630 21:34:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:36.630 21:34:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:36.630 21:34:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:36.630 21:34:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:36.630 21:34:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:36.630 21:34:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:36.630 21:34:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:36.630 21:34:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:36.630 21:34:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:65f0dc09-2f81-4c7b-a413-2a2a000e2750 00:21:36.630 21:34:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=65f0dc09-2f81-4c7b-a413-2a2a000e2750 00:21:36.630 21:34:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:36.630 21:34:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:36.630 21:34:57 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:36.630 21:34:57 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:36.630 21:34:57 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:36.630 21:34:57 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:36.630 21:34:57 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:36.630 21:34:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:36.630 21:34:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:36.630 21:34:57 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:36.630 21:34:57 -- 
paths/export.sh@5 -- # export PATH 00:21:36.630 21:34:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:36.630 21:34:57 -- nvmf/common.sh@46 -- # : 0 00:21:36.630 21:34:57 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:36.630 21:34:57 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:36.630 21:34:57 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:36.630 21:34:57 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:36.630 21:34:57 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:36.630 21:34:57 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:36.630 21:34:57 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:36.630 21:34:57 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:36.630 21:34:57 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:36.630 21:34:57 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:36.630 21:34:57 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:21:36.630 21:34:57 -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:36.630 21:34:57 -- target/multipath.sh@43 -- # nvmftestinit 00:21:36.630 21:34:57 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:36.630 21:34:57 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:36.630 21:34:57 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:36.630 21:34:57 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:36.630 21:34:57 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:36.630 21:34:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:36.630 21:34:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:36.630 21:34:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:36.630 21:34:57 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:21:36.630 21:34:57 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:21:36.630 21:34:57 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:21:36.630 21:34:57 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:21:36.630 21:34:57 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:21:36.630 21:34:57 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:21:36.630 21:34:57 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:36.630 21:34:57 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:36.630 21:34:57 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:36.630 21:34:57 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:21:36.630 21:34:57 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:36.630 21:34:57 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:36.630 21:34:57 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:36.630 21:34:57 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:36.630 21:34:57 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:36.630 21:34:57 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:36.630 21:34:57 -- nvmf/common.sh@150 -- # 
NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:36.630 21:34:57 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:36.630 21:34:57 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:21:36.630 21:34:57 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:21:36.630 Cannot find device "nvmf_tgt_br" 00:21:36.630 21:34:57 -- nvmf/common.sh@154 -- # true 00:21:36.630 21:34:57 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:21:36.630 Cannot find device "nvmf_tgt_br2" 00:21:36.630 21:34:57 -- nvmf/common.sh@155 -- # true 00:21:36.630 21:34:57 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:21:36.630 21:34:57 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:21:36.630 Cannot find device "nvmf_tgt_br" 00:21:36.630 21:34:57 -- nvmf/common.sh@157 -- # true 00:21:36.630 21:34:57 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:21:36.630 Cannot find device "nvmf_tgt_br2" 00:21:36.630 21:34:57 -- nvmf/common.sh@158 -- # true 00:21:36.630 21:34:57 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:21:36.630 21:34:57 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:21:36.630 21:34:57 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:36.630 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:36.630 21:34:57 -- nvmf/common.sh@161 -- # true 00:21:36.630 21:34:57 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:36.631 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:36.631 21:34:57 -- nvmf/common.sh@162 -- # true 00:21:36.631 21:34:57 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:21:36.631 21:34:57 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:36.631 21:34:57 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:36.631 21:34:57 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:36.890 21:34:57 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:36.890 21:34:57 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:36.890 21:34:57 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:36.890 21:34:57 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:36.890 21:34:57 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:36.890 21:34:57 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:21:36.890 21:34:57 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:21:36.890 21:34:57 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:21:36.890 21:34:57 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:21:36.890 21:34:57 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:36.890 21:34:57 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:36.890 21:34:57 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:36.890 21:34:57 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:21:36.890 21:34:57 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:21:36.890 21:34:57 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:21:36.890 21:34:57 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:36.890 21:34:57 
-- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:36.890 21:34:57 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:36.890 21:34:57 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:36.890 21:34:57 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:21:36.890 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:36.890 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.092 ms 00:21:36.890 00:21:36.890 --- 10.0.0.2 ping statistics --- 00:21:36.890 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:36.890 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:21:36.890 21:34:57 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:21:36.890 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:36.890 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.034 ms 00:21:36.890 00:21:36.890 --- 10.0.0.3 ping statistics --- 00:21:36.890 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:36.890 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:21:36.890 21:34:57 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:36.890 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:36.890 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:21:36.890 00:21:36.890 --- 10.0.0.1 ping statistics --- 00:21:36.890 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:36.890 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:21:36.890 21:34:57 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:36.890 21:34:57 -- nvmf/common.sh@421 -- # return 0 00:21:36.890 21:34:57 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:36.890 21:34:57 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:36.890 21:34:57 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:36.890 21:34:57 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:36.890 21:34:57 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:36.890 21:34:57 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:36.890 21:34:57 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:36.890 21:34:57 -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:21:36.890 21:34:57 -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:21:36.890 21:34:57 -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:21:36.890 21:34:57 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:36.890 21:34:57 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:36.890 21:34:57 -- common/autotest_common.sh@10 -- # set +x 00:21:36.890 21:34:57 -- nvmf/common.sh@469 -- # nvmfpid=74389 00:21:36.890 21:34:57 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:36.890 21:34:57 -- nvmf/common.sh@470 -- # waitforlisten 74389 00:21:36.890 21:34:57 -- common/autotest_common.sh@819 -- # '[' -z 74389 ']' 00:21:36.890 21:34:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:36.890 21:34:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:36.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:36.890 21:34:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
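The ping exchange above confirms the virtual topology that nvmf_veth_init builds before the target is started: one veth pair for the initiator, two pairs whose far ends live in the nvmf_tgt_ns_spdk namespace, and a bridge tying the host-side ends together. A hand-run sketch with the same names and addresses as this log:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge
  for link in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do ip link set "$link" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3     # initiator can reach both target addresses
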
00:21:36.890 21:34:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:36.890 21:34:57 -- common/autotest_common.sh@10 -- # set +x 00:21:37.148 [2024-07-11 21:34:57.861344] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:21:37.148 [2024-07-11 21:34:57.861448] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:37.148 [2024-07-11 21:34:58.005506] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:37.407 [2024-07-11 21:34:58.101897] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:37.407 [2024-07-11 21:34:58.102051] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:37.407 [2024-07-11 21:34:58.102065] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:37.407 [2024-07-11 21:34:58.102075] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:37.407 [2024-07-11 21:34:58.102210] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:37.407 [2024-07-11 21:34:58.102470] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:37.407 [2024-07-11 21:34:58.102588] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:37.407 [2024-07-11 21:34:58.102637] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:37.975 21:34:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:37.975 21:34:58 -- common/autotest_common.sh@852 -- # return 0 00:21:37.975 21:34:58 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:37.975 21:34:58 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:37.975 21:34:58 -- common/autotest_common.sh@10 -- # set +x 00:21:37.975 21:34:58 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:37.975 21:34:58 -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:38.233 [2024-07-11 21:34:59.145303] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:38.233 21:34:59 -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:21:38.491 Malloc0 00:21:38.752 21:34:59 -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:21:38.752 21:34:59 -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:39.013 21:34:59 -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:39.271 [2024-07-11 21:35:00.106153] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:39.271 21:35:00 -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:21:39.529 [2024-07-11 21:35:00.342414] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:39.529 21:35:00 -- target/multipath.sh@67 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:65f0dc09-2f81-4c7b-a413-2a2a000e2750 --hostid=65f0dc09-2f81-4c7b-a413-2a2a000e2750 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:21:39.787 21:35:00 -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:65f0dc09-2f81-4c7b-a413-2a2a000e2750 --hostid=65f0dc09-2f81-4c7b-a413-2a2a000e2750 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:21:39.787 21:35:00 -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:21:39.787 21:35:00 -- common/autotest_common.sh@1177 -- # local i=0 00:21:39.787 21:35:00 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:21:39.787 21:35:00 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:21:39.787 21:35:00 -- common/autotest_common.sh@1184 -- # sleep 2 00:21:41.688 21:35:02 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:21:41.688 21:35:02 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:21:41.688 21:35:02 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:21:41.946 21:35:02 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:21:41.946 21:35:02 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:21:41.946 21:35:02 -- common/autotest_common.sh@1187 -- # return 0 00:21:41.946 21:35:02 -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:21:41.946 21:35:02 -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:21:41.946 21:35:02 -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:21:41.946 21:35:02 -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:21:41.946 21:35:02 -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:21:41.946 21:35:02 -- target/multipath.sh@38 -- # echo nvme-subsys0 00:21:41.946 21:35:02 -- target/multipath.sh@38 -- # return 0 00:21:41.946 21:35:02 -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:21:41.946 21:35:02 -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:21:41.946 21:35:02 -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:21:41.947 21:35:02 -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:21:41.947 21:35:02 -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:21:41.947 21:35:02 -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:21:41.947 21:35:02 -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:21:41.947 21:35:02 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:21:41.947 21:35:02 -- target/multipath.sh@22 -- # local timeout=20 00:21:41.947 21:35:02 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:21:41.947 21:35:02 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:21:41.947 21:35:02 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:21:41.947 21:35:02 -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:21:41.947 21:35:02 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:21:41.947 21:35:02 -- target/multipath.sh@22 -- # local timeout=20 00:21:41.947 21:35:02 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:21:41.947 21:35:02 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:21:41.947 21:35:02 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:21:41.947 21:35:02 -- target/multipath.sh@85 -- # echo numa 00:21:41.947 21:35:02 -- target/multipath.sh@88 -- # fio_pid=74480 00:21:41.947 21:35:02 -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:21:41.947 21:35:02 -- target/multipath.sh@90 -- # sleep 1 00:21:41.947 [global] 00:21:41.947 thread=1 00:21:41.947 invalidate=1 00:21:41.947 rw=randrw 00:21:41.947 time_based=1 00:21:41.947 runtime=6 00:21:41.947 ioengine=libaio 00:21:41.947 direct=1 00:21:41.947 bs=4096 00:21:41.947 iodepth=128 00:21:41.947 norandommap=0 00:21:41.947 numjobs=1 00:21:41.947 00:21:41.947 verify_dump=1 00:21:41.947 verify_backlog=512 00:21:41.947 verify_state_save=0 00:21:41.947 do_verify=1 00:21:41.947 verify=crc32c-intel 00:21:41.947 [job0] 00:21:41.947 filename=/dev/nvme0n1 00:21:41.947 Could not set queue depth (nvme0n1) 00:21:41.947 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:21:41.947 fio-3.35 00:21:41.947 Starting 1 thread 00:21:42.954 21:35:03 -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:21:43.212 21:35:03 -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:21:43.472 21:35:04 -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:21:43.472 21:35:04 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:21:43.472 21:35:04 -- target/multipath.sh@22 -- # local timeout=20 00:21:43.472 21:35:04 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:21:43.472 21:35:04 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:21:43.472 21:35:04 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:21:43.472 21:35:04 -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:21:43.472 21:35:04 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:21:43.472 21:35:04 -- target/multipath.sh@22 -- # local timeout=20 00:21:43.472 21:35:04 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:21:43.472 21:35:04 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:21:43.472 21:35:04 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:21:43.472 21:35:04 -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:21:43.472 21:35:04 -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:21:43.730 21:35:04 -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:21:43.730 21:35:04 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:21:43.730 21:35:04 -- target/multipath.sh@22 -- # local timeout=20 00:21:43.730 21:35:04 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:21:43.730 21:35:04 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:21:43.730 21:35:04 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:21:43.730 21:35:04 -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:21:43.730 21:35:04 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:21:43.730 21:35:04 -- target/multipath.sh@22 -- # local timeout=20 00:21:43.730 21:35:04 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:21:43.730 21:35:04 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:21:43.730 21:35:04 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:21:43.730 21:35:04 -- target/multipath.sh@104 -- # wait 74480 00:21:48.996 00:21:48.996 job0: (groupid=0, jobs=1): err= 0: pid=74501: Thu Jul 11 21:35:09 2024 00:21:48.996 read: IOPS=11.0k, BW=42.9MiB/s (44.9MB/s)(257MiB/6006msec) 00:21:48.996 slat (usec): min=6, max=8426, avg=52.64, stdev=224.24 00:21:48.996 clat (usec): min=867, max=15614, avg=7857.89, stdev=1446.73 00:21:48.996 lat (usec): min=894, max=16190, avg=7910.53, stdev=1452.61 00:21:48.996 clat percentiles (usec): 00:21:48.996 | 1.00th=[ 4113], 5.00th=[ 5932], 10.00th=[ 6652], 20.00th=[ 7046], 00:21:48.996 | 30.00th=[ 7308], 40.00th=[ 7504], 50.00th=[ 7701], 60.00th=[ 7898], 00:21:48.996 | 70.00th=[ 8160], 80.00th=[ 8455], 90.00th=[ 9110], 95.00th=[11338], 00:21:48.996 | 99.00th=[12256], 99.50th=[12518], 99.90th=[14091], 99.95th=[14746], 00:21:48.996 | 99.99th=[15401] 00:21:48.996 bw ( KiB/s): min=11920, max=28112, per=53.58%, avg=23510.36, stdev=5090.29, samples=11 00:21:48.996 iops : min= 2980, max= 7028, avg=5877.55, stdev=1272.56, samples=11 00:21:48.996 write: IOPS=6459, BW=25.2MiB/s (26.5MB/s)(138MiB/5477msec); 0 zone resets 00:21:48.996 slat (usec): min=12, max=2650, avg=62.33, stdev=152.47 00:21:48.996 clat (usec): min=870, max=14942, avg=6901.35, stdev=1229.50 00:21:48.996 lat (usec): min=974, max=14970, avg=6963.68, stdev=1234.36 00:21:48.996 clat percentiles (usec): 00:21:48.996 | 1.00th=[ 3163], 5.00th=[ 4146], 10.00th=[ 5342], 20.00th=[ 6390], 00:21:48.996 | 30.00th=[ 6718], 40.00th=[ 6915], 50.00th=[ 7046], 60.00th=[ 7242], 00:21:48.996 | 70.00th=[ 7373], 80.00th=[ 7570], 90.00th=[ 7898], 95.00th=[ 8225], 00:21:48.996 | 99.00th=[10814], 99.50th=[11207], 99.90th=[12649], 99.95th=[12911], 00:21:48.996 | 99.99th=[13566] 00:21:48.996 bw ( KiB/s): min=12088, max=27792, per=91.02%, avg=23517.82, stdev=4905.50, samples=11 00:21:48.996 iops : min= 3022, max= 6948, avg=5879.36, stdev=1226.33, samples=11 00:21:48.996 lat (usec) : 1000=0.01% 00:21:48.996 lat (msec) : 2=0.01%, 4=2.05%, 10=92.07%, 20=5.87% 00:21:48.996 cpu : usr=5.71%, sys=22.20%, ctx=5786, majf=0, minf=84 00:21:48.996 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:21:48.996 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:48.996 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:48.996 issued rwts: total=65887,35378,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:48.996 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:48.996 00:21:48.996 Run status group 0 (all jobs): 00:21:48.996 READ: bw=42.9MiB/s (44.9MB/s), 42.9MiB/s-42.9MiB/s (44.9MB/s-44.9MB/s), io=257MiB (270MB), run=6006-6006msec 00:21:48.996 WRITE: bw=25.2MiB/s (26.5MB/s), 25.2MiB/s-25.2MiB/s (26.5MB/s-26.5MB/s), io=138MiB (145MB), run=5477-5477msec 00:21:48.996 00:21:48.996 Disk stats (read/write): 00:21:48.996 
nvme0n1: ios=64930/34699, merge=0/0, ticks=487454/224805, in_queue=712259, util=98.63% 00:21:48.996 21:35:09 -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:21:48.996 21:35:09 -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:21:48.996 21:35:09 -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:21:48.996 21:35:09 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:21:48.996 21:35:09 -- target/multipath.sh@22 -- # local timeout=20 00:21:48.996 21:35:09 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:21:48.996 21:35:09 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:21:48.996 21:35:09 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:21:48.996 21:35:09 -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:21:48.996 21:35:09 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:21:48.996 21:35:09 -- target/multipath.sh@22 -- # local timeout=20 00:21:48.996 21:35:09 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:21:48.996 21:35:09 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:21:48.996 21:35:09 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:21:48.996 21:35:09 -- target/multipath.sh@113 -- # echo round-robin 00:21:48.996 21:35:09 -- target/multipath.sh@116 -- # fio_pid=74582 00:21:48.996 21:35:09 -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:21:48.996 21:35:09 -- target/multipath.sh@118 -- # sleep 1 00:21:48.996 [global] 00:21:48.996 thread=1 00:21:48.996 invalidate=1 00:21:48.996 rw=randrw 00:21:48.996 time_based=1 00:21:48.996 runtime=6 00:21:48.996 ioengine=libaio 00:21:48.996 direct=1 00:21:48.996 bs=4096 00:21:48.996 iodepth=128 00:21:48.996 norandommap=0 00:21:48.996 numjobs=1 00:21:48.996 00:21:48.996 verify_dump=1 00:21:48.996 verify_backlog=512 00:21:48.996 verify_state_save=0 00:21:48.996 do_verify=1 00:21:48.996 verify=crc32c-intel 00:21:48.996 [job0] 00:21:48.997 filename=/dev/nvme0n1 00:21:48.997 Could not set queue depth (nvme0n1) 00:21:48.997 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:21:48.997 fio-3.35 00:21:48.997 Starting 1 thread 00:21:49.930 21:35:10 -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:21:50.187 21:35:10 -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:21:50.445 21:35:11 -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:21:50.445 21:35:11 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:21:50.445 21:35:11 -- target/multipath.sh@22 -- # local timeout=20 00:21:50.445 21:35:11 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:21:50.445 21:35:11 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:21:50.445 21:35:11 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:21:50.445 21:35:11 -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:21:50.445 21:35:11 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:21:50.445 21:35:11 -- target/multipath.sh@22 -- # local timeout=20 00:21:50.445 21:35:11 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:21:50.445 21:35:11 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:21:50.445 21:35:11 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:21:50.445 21:35:11 -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:21:50.703 21:35:11 -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:21:50.962 21:35:11 -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:21:50.962 21:35:11 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:21:50.962 21:35:11 -- target/multipath.sh@22 -- # local timeout=20 00:21:50.962 21:35:11 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:21:50.962 21:35:11 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:21:50.962 21:35:11 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:21:50.962 21:35:11 -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:21:50.962 21:35:11 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:21:50.962 21:35:11 -- target/multipath.sh@22 -- # local timeout=20 00:21:50.962 21:35:11 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:21:50.962 21:35:11 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:21:50.962 21:35:11 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:21:50.962 21:35:11 -- target/multipath.sh@132 -- # wait 74582 00:21:55.146 00:21:55.146 job0: (groupid=0, jobs=1): err= 0: pid=74603: Thu Jul 11 21:35:15 2024 00:21:55.146 read: IOPS=12.7k, BW=49.5MiB/s (51.9MB/s)(298MiB/6007msec) 00:21:55.146 slat (usec): min=2, max=8877, avg=40.40, stdev=190.73 00:21:55.146 clat (usec): min=226, max=13976, avg=7050.33, stdev=1701.34 00:21:55.146 lat (usec): min=249, max=13986, avg=7090.73, stdev=1714.48 00:21:55.146 clat percentiles (usec): 00:21:55.146 | 1.00th=[ 2933], 5.00th=[ 3949], 10.00th=[ 4686], 20.00th=[ 5669], 00:21:55.146 | 30.00th=[ 6587], 40.00th=[ 6980], 50.00th=[ 7308], 60.00th=[ 7504], 00:21:55.146 | 70.00th=[ 7767], 80.00th=[ 8094], 90.00th=[ 8586], 95.00th=[ 9634], 00:21:55.146 | 99.00th=[11994], 99.50th=[12256], 99.90th=[12649], 99.95th=[12780], 00:21:55.146 | 99.99th=[13173] 00:21:55.146 bw ( KiB/s): min= 7976, max=43384, per=52.75%, avg=26757.82, stdev=8895.24, samples=11 00:21:55.146 iops : min= 1994, max=10846, avg=6689.45, stdev=2223.81, samples=11 00:21:55.146 write: IOPS=7490, BW=29.3MiB/s (30.7MB/s)(149MiB/5108msec); 0 zone resets 00:21:55.146 slat (usec): min=4, max=1824, avg=48.53, stdev=117.67 00:21:55.146 clat (usec): min=1352, max=13173, avg=5886.60, stdev=1652.83 00:21:55.146 lat (usec): min=1451, max=13223, avg=5935.13, stdev=1668.01 00:21:55.146 clat percentiles (usec): 00:21:55.146 | 1.00th=[ 2507], 5.00th=[ 3163], 10.00th=[ 3523], 20.00th=[ 4113], 00:21:55.146 | 30.00th=[ 4686], 40.00th=[ 5669], 50.00th=[ 6456], 60.00th=[ 6783], 00:21:55.146 | 70.00th=[ 7046], 80.00th=[ 7308], 90.00th=[ 7570], 95.00th=[ 7832], 00:21:55.146 | 99.00th=[ 9634], 99.50th=[10814], 99.90th=[11863], 99.95th=[12387], 00:21:55.146 | 99.99th=[13042] 00:21:55.146 bw ( KiB/s): min= 8536, max=42576, per=89.17%, avg=26715.45, stdev=8644.05, samples=11 00:21:55.146 iops : min= 2134, max=10644, avg=6678.82, stdev=2160.95, samples=11 00:21:55.146 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:21:55.146 lat (msec) : 2=0.12%, 4=9.34%, 10=87.27%, 20=3.23% 00:21:55.146 cpu : usr=6.29%, sys=24.74%, ctx=6441, majf=0, minf=108 00:21:55.146 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:21:55.146 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:55.146 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:55.146 issued rwts: total=76180,38259,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:55.146 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:55.146 00:21:55.146 Run status group 0 (all jobs): 00:21:55.146 READ: bw=49.5MiB/s (51.9MB/s), 49.5MiB/s-49.5MiB/s (51.9MB/s-51.9MB/s), io=298MiB (312MB), run=6007-6007msec 00:21:55.146 WRITE: bw=29.3MiB/s (30.7MB/s), 29.3MiB/s-29.3MiB/s (30.7MB/s-30.7MB/s), io=149MiB (157MB), run=5108-5108msec 00:21:55.146 00:21:55.146 Disk stats (read/write): 00:21:55.146 nvme0n1: ios=74938/37888, merge=0/0, ticks=497129/204658, in_queue=701787, util=98.61% 00:21:55.146 21:35:15 -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:21:55.146 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:21:55.146 21:35:15 -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:21:55.146 21:35:15 -- common/autotest_common.sh@1198 -- # local i=0 00:21:55.146 21:35:15 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:21:55.146 
21:35:15 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:21:55.146 21:35:15 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:21:55.146 21:35:15 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:21:55.146 21:35:15 -- common/autotest_common.sh@1210 -- # return 0 00:21:55.146 21:35:15 -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:55.404 21:35:16 -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:21:55.404 21:35:16 -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:21:55.404 21:35:16 -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:21:55.404 21:35:16 -- target/multipath.sh@144 -- # nvmftestfini 00:21:55.404 21:35:16 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:55.404 21:35:16 -- nvmf/common.sh@116 -- # sync 00:21:55.404 21:35:16 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:55.404 21:35:16 -- nvmf/common.sh@119 -- # set +e 00:21:55.404 21:35:16 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:55.404 21:35:16 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:55.404 rmmod nvme_tcp 00:21:55.404 rmmod nvme_fabrics 00:21:55.404 rmmod nvme_keyring 00:21:55.404 21:35:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:55.404 21:35:16 -- nvmf/common.sh@123 -- # set -e 00:21:55.404 21:35:16 -- nvmf/common.sh@124 -- # return 0 00:21:55.404 21:35:16 -- nvmf/common.sh@477 -- # '[' -n 74389 ']' 00:21:55.404 21:35:16 -- nvmf/common.sh@478 -- # killprocess 74389 00:21:55.404 21:35:16 -- common/autotest_common.sh@926 -- # '[' -z 74389 ']' 00:21:55.404 21:35:16 -- common/autotest_common.sh@930 -- # kill -0 74389 00:21:55.404 21:35:16 -- common/autotest_common.sh@931 -- # uname 00:21:55.404 21:35:16 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:55.404 21:35:16 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 74389 00:21:55.404 21:35:16 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:21:55.404 21:35:16 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:21:55.404 21:35:16 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 74389' 00:21:55.404 killing process with pid 74389 00:21:55.404 21:35:16 -- common/autotest_common.sh@945 -- # kill 74389 00:21:55.404 21:35:16 -- common/autotest_common.sh@950 -- # wait 74389 00:21:55.662 21:35:16 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:55.662 21:35:16 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:55.662 21:35:16 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:55.662 21:35:16 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:55.662 21:35:16 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:55.662 21:35:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:55.662 21:35:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:55.662 21:35:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:55.662 21:35:16 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:21:55.662 ************************************ 00:21:55.662 END TEST nvmf_multipath 00:21:55.662 ************************************ 00:21:55.662 00:21:55.662 real 0m19.279s 00:21:55.662 user 1m13.004s 00:21:55.662 sys 0m9.357s 00:21:55.662 21:35:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:55.662 21:35:16 -- common/autotest_common.sh@10 -- # set +x 00:21:55.922 21:35:16 -- nvmf/nvmf.sh@52 -- # 
run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:21:55.922 21:35:16 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:21:55.922 21:35:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:55.922 21:35:16 -- common/autotest_common.sh@10 -- # set +x 00:21:55.922 ************************************ 00:21:55.922 START TEST nvmf_zcopy 00:21:55.922 ************************************ 00:21:55.922 21:35:16 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:21:55.922 * Looking for test storage... 00:21:55.922 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:21:55.922 21:35:16 -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:55.922 21:35:16 -- nvmf/common.sh@7 -- # uname -s 00:21:55.922 21:35:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:55.922 21:35:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:55.922 21:35:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:55.922 21:35:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:55.922 21:35:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:55.922 21:35:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:55.922 21:35:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:55.922 21:35:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:55.922 21:35:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:55.922 21:35:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:55.922 21:35:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:65f0dc09-2f81-4c7b-a413-2a2a000e2750 00:21:55.922 21:35:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=65f0dc09-2f81-4c7b-a413-2a2a000e2750 00:21:55.922 21:35:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:55.922 21:35:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:55.922 21:35:16 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:55.922 21:35:16 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:55.922 21:35:16 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:55.922 21:35:16 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:55.922 21:35:16 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:55.922 21:35:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:55.922 21:35:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:21:55.922 21:35:16 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:55.922 21:35:16 -- paths/export.sh@5 -- # export PATH 00:21:55.922 21:35:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:55.922 21:35:16 -- nvmf/common.sh@46 -- # : 0 00:21:55.922 21:35:16 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:55.922 21:35:16 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:55.922 21:35:16 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:55.922 21:35:16 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:55.922 21:35:16 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:55.922 21:35:16 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:55.922 21:35:16 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:55.922 21:35:16 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:55.922 21:35:16 -- target/zcopy.sh@12 -- # nvmftestinit 00:21:55.922 21:35:16 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:55.922 21:35:16 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:55.922 21:35:16 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:55.922 21:35:16 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:55.922 21:35:16 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:55.922 21:35:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:55.922 21:35:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:55.922 21:35:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:55.922 21:35:16 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:21:55.922 21:35:16 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:21:55.922 21:35:16 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:21:55.922 21:35:16 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:21:55.922 21:35:16 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:21:55.922 21:35:16 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:21:55.922 21:35:16 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:55.922 21:35:16 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:55.922 21:35:16 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:55.922 21:35:16 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:21:55.922 21:35:16 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:55.922 21:35:16 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:55.922 21:35:16 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:55.922 21:35:16 -- nvmf/common.sh@147 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:55.922 21:35:16 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:55.922 21:35:16 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:55.922 21:35:16 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:55.923 21:35:16 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:55.923 21:35:16 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:21:55.923 21:35:16 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:21:55.923 Cannot find device "nvmf_tgt_br" 00:21:55.923 21:35:16 -- nvmf/common.sh@154 -- # true 00:21:55.923 21:35:16 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:21:55.923 Cannot find device "nvmf_tgt_br2" 00:21:55.923 21:35:16 -- nvmf/common.sh@155 -- # true 00:21:55.923 21:35:16 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:21:55.923 21:35:16 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:21:55.923 Cannot find device "nvmf_tgt_br" 00:21:55.923 21:35:16 -- nvmf/common.sh@157 -- # true 00:21:55.923 21:35:16 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:21:55.923 Cannot find device "nvmf_tgt_br2" 00:21:55.923 21:35:16 -- nvmf/common.sh@158 -- # true 00:21:55.923 21:35:16 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:21:55.923 21:35:16 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:21:56.181 21:35:16 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:56.181 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:56.181 21:35:16 -- nvmf/common.sh@161 -- # true 00:21:56.181 21:35:16 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:56.181 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:56.181 21:35:16 -- nvmf/common.sh@162 -- # true 00:21:56.181 21:35:16 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:21:56.181 21:35:16 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:56.181 21:35:16 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:56.181 21:35:16 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:56.181 21:35:16 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:56.181 21:35:16 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:56.181 21:35:16 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:56.181 21:35:16 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:56.181 21:35:16 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:56.181 21:35:16 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:21:56.181 21:35:16 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:21:56.181 21:35:17 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:21:56.181 21:35:17 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:21:56.181 21:35:17 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:56.181 21:35:17 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:56.181 21:35:17 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:56.181 21:35:17 -- nvmf/common.sh@191 -- # ip link add nvmf_br type 
bridge 00:21:56.181 21:35:17 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:21:56.181 21:35:17 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:21:56.181 21:35:17 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:56.181 21:35:17 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:56.181 21:35:17 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:56.181 21:35:17 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:56.181 21:35:17 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:21:56.181 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:56.181 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.120 ms 00:21:56.181 00:21:56.181 --- 10.0.0.2 ping statistics --- 00:21:56.181 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:56.181 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:21:56.181 21:35:17 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:21:56.181 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:56.181 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.079 ms 00:21:56.181 00:21:56.181 --- 10.0.0.3 ping statistics --- 00:21:56.181 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:56.181 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:21:56.181 21:35:17 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:56.181 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:56.181 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:21:56.181 00:21:56.181 --- 10.0.0.1 ping statistics --- 00:21:56.181 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:56.181 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:21:56.181 21:35:17 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:56.181 21:35:17 -- nvmf/common.sh@421 -- # return 0 00:21:56.181 21:35:17 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:56.181 21:35:17 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:56.181 21:35:17 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:56.181 21:35:17 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:56.181 21:35:17 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:56.181 21:35:17 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:56.181 21:35:17 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:56.181 21:35:17 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:21:56.181 21:35:17 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:56.181 21:35:17 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:56.181 21:35:17 -- common/autotest_common.sh@10 -- # set +x 00:21:56.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:56.181 21:35:17 -- nvmf/common.sh@469 -- # nvmfpid=74851 00:21:56.181 21:35:17 -- nvmf/common.sh@470 -- # waitforlisten 74851 00:21:56.181 21:35:17 -- common/autotest_common.sh@819 -- # '[' -z 74851 ']' 00:21:56.181 21:35:17 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:56.181 21:35:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:56.181 21:35:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:56.181 21:35:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
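nvmfappstart then launches the target inside that namespace with a single-core mask (-m 0x2 for the zcopy test) and blocks until the RPC socket answers; roughly, with the paths from this run and a simplified stand-in for waitforlisten:

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!
  # waitforlisten polls the UNIX-domain RPC socket; a minimal equivalent:
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
      sleep 0.5
  done
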
00:21:56.181 21:35:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:56.181 21:35:17 -- common/autotest_common.sh@10 -- # set +x 00:21:56.522 [2024-07-11 21:35:17.167773] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:21:56.522 [2024-07-11 21:35:17.167889] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:56.522 [2024-07-11 21:35:17.306234] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:56.522 [2024-07-11 21:35:17.418884] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:56.522 [2024-07-11 21:35:17.419121] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:56.522 [2024-07-11 21:35:17.419155] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:56.522 [2024-07-11 21:35:17.419180] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:56.522 [2024-07-11 21:35:17.419242] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:57.458 21:35:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:57.458 21:35:18 -- common/autotest_common.sh@852 -- # return 0 00:21:57.458 21:35:18 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:57.458 21:35:18 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:57.458 21:35:18 -- common/autotest_common.sh@10 -- # set +x 00:21:57.458 21:35:18 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:57.459 21:35:18 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:21:57.459 21:35:18 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:21:57.459 21:35:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:57.459 21:35:18 -- common/autotest_common.sh@10 -- # set +x 00:21:57.459 [2024-07-11 21:35:18.104780] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:57.459 21:35:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:57.459 21:35:18 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:21:57.459 21:35:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:57.459 21:35:18 -- common/autotest_common.sh@10 -- # set +x 00:21:57.459 21:35:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:57.459 21:35:18 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:57.459 21:35:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:57.459 21:35:18 -- common/autotest_common.sh@10 -- # set +x 00:21:57.459 [2024-07-11 21:35:18.120885] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:57.459 21:35:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:57.459 21:35:18 -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:57.459 21:35:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:57.459 21:35:18 -- common/autotest_common.sh@10 -- # set +x 00:21:57.459 21:35:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:57.459 21:35:18 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 
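With the target up, zcopy.sh assembles the test subsystem through a short RPC sequence: a TCP transport with zero-copy enabled and in-capsule data disabled (-c 0), a subsystem capped at 10 namespaces, listeners for I/O and discovery, and a 32 MB malloc bdev exposed as namespace 1. The rpc_cmd calls in the trace map onto rpc.py like this:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -c 0 --zcopy
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  $rpc bdev_malloc_create 32 4096 -b malloc0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
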
00:21:57.459 21:35:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:57.459 21:35:18 -- common/autotest_common.sh@10 -- # set +x 00:21:57.459 malloc0 00:21:57.459 21:35:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:57.459 21:35:18 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:57.459 21:35:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:57.459 21:35:18 -- common/autotest_common.sh@10 -- # set +x 00:21:57.459 21:35:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:57.459 21:35:18 -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:21:57.459 21:35:18 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:21:57.459 21:35:18 -- nvmf/common.sh@520 -- # config=() 00:21:57.459 21:35:18 -- nvmf/common.sh@520 -- # local subsystem config 00:21:57.459 21:35:18 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:57.459 21:35:18 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:57.459 { 00:21:57.459 "params": { 00:21:57.459 "name": "Nvme$subsystem", 00:21:57.459 "trtype": "$TEST_TRANSPORT", 00:21:57.459 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:57.459 "adrfam": "ipv4", 00:21:57.459 "trsvcid": "$NVMF_PORT", 00:21:57.459 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:57.459 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:57.459 "hdgst": ${hdgst:-false}, 00:21:57.459 "ddgst": ${ddgst:-false} 00:21:57.459 }, 00:21:57.459 "method": "bdev_nvme_attach_controller" 00:21:57.459 } 00:21:57.459 EOF 00:21:57.459 )") 00:21:57.459 21:35:18 -- nvmf/common.sh@542 -- # cat 00:21:57.459 21:35:18 -- nvmf/common.sh@544 -- # jq . 00:21:57.459 21:35:18 -- nvmf/common.sh@545 -- # IFS=, 00:21:57.459 21:35:18 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:21:57.459 "params": { 00:21:57.459 "name": "Nvme1", 00:21:57.459 "trtype": "tcp", 00:21:57.459 "traddr": "10.0.0.2", 00:21:57.459 "adrfam": "ipv4", 00:21:57.459 "trsvcid": "4420", 00:21:57.459 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:57.459 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:57.459 "hdgst": false, 00:21:57.459 "ddgst": false 00:21:57.459 }, 00:21:57.459 "method": "bdev_nvme_attach_controller" 00:21:57.459 }' 00:21:57.459 [2024-07-11 21:35:18.205610] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:21:57.459 [2024-07-11 21:35:18.205712] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74884 ] 00:21:57.459 [2024-07-11 21:35:18.344715] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:57.718 [2024-07-11 21:35:18.442676] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:57.718 Running I/O for 10 seconds... 
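Condensed, the target provisioning and the first bdevperf pass traced above amount to the commands below. These are the same invocations visible in the xtrace output; rpc_cmd is the autotest wrapper that forwards to SPDK's rpc.py against the target's RPC socket, and the JSON shown in a comment is the bdev_nvme attach config that gen_nvmf_target_json prints and bdevperf reads from a file descriptor.

  # Provision the NVMe-oF/TCP target started above (all via rpc_cmd).
  rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy                          # TCP transport with zero-copy enabled
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd bdev_malloc_create 32 4096 -b malloc0                                 # 32 MB RAM-backed bdev, 4 KiB blocks
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1         # expose it as NSID 1

  # First pass: 10 s verify workload, queue depth 128, 8 KiB I/O, with the attach
  # config piped in on a file descriptor (/dev/fd/62 in the trace).
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192

  # The config that gen_nvmf_target_json emits (expanded form, as printed above) is essentially:
  # { "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4",
  #   "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode1",
  #   "hostnqn": "nqn.2016-06.io.spdk:host1", "hdgst": false, "ddgst": false },
  #   "method": "bdev_nvme_attach_controller" }

The latency table that follows reports the result of this verify pass (about 8.5K IOPS / 66 MiB/s at an average latency of ~15 ms for 8 KiB I/O at queue depth 128).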
00:22:07.688 00:22:07.688 Latency(us) 00:22:07.688 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:07.688 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:22:07.688 Verification LBA range: start 0x0 length 0x1000 00:22:07.688 Nvme1n1 : 10.01 8465.57 66.14 0.00 0.00 15082.11 1325.61 23116.33 00:22:07.688 =================================================================================================================== 00:22:07.688 Total : 8465.57 66.14 0.00 0.00 15082.11 1325.61 23116.33 00:22:07.945 21:35:28 -- target/zcopy.sh@39 -- # perfpid=75006 00:22:07.945 21:35:28 -- target/zcopy.sh@41 -- # xtrace_disable 00:22:07.945 21:35:28 -- common/autotest_common.sh@10 -- # set +x 00:22:07.945 21:35:28 -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:22:07.945 21:35:28 -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:22:07.945 21:35:28 -- nvmf/common.sh@520 -- # config=() 00:22:07.945 21:35:28 -- nvmf/common.sh@520 -- # local subsystem config 00:22:07.945 21:35:28 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:22:07.945 21:35:28 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:22:07.945 { 00:22:07.945 "params": { 00:22:07.945 "name": "Nvme$subsystem", 00:22:07.945 "trtype": "$TEST_TRANSPORT", 00:22:07.945 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:07.945 "adrfam": "ipv4", 00:22:07.945 "trsvcid": "$NVMF_PORT", 00:22:07.945 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:07.945 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:07.945 "hdgst": ${hdgst:-false}, 00:22:07.945 "ddgst": ${ddgst:-false} 00:22:07.945 }, 00:22:07.945 "method": "bdev_nvme_attach_controller" 00:22:07.945 } 00:22:07.945 EOF 00:22:07.945 )") 00:22:07.945 21:35:28 -- nvmf/common.sh@542 -- # cat 00:22:07.945 [2024-07-11 21:35:28.854776] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:07.945 [2024-07-11 21:35:28.854988] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:07.945 21:35:28 -- nvmf/common.sh@544 -- # jq . 
00:22:07.945 21:35:28 -- nvmf/common.sh@545 -- # IFS=, 00:22:07.945 21:35:28 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:22:07.945 "params": { 00:22:07.945 "name": "Nvme1", 00:22:07.945 "trtype": "tcp", 00:22:07.945 "traddr": "10.0.0.2", 00:22:07.945 "adrfam": "ipv4", 00:22:07.945 "trsvcid": "4420", 00:22:07.945 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:07.945 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:07.945 "hdgst": false, 00:22:07.945 "ddgst": false 00:22:07.945 }, 00:22:07.945 "method": "bdev_nvme_attach_controller" 00:22:07.945 }' 00:22:07.945 [2024-07-11 21:35:28.862729] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:07.945 [2024-07-11 21:35:28.862763] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:07.945 [2024-07-11 21:35:28.870722] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:07.945 [2024-07-11 21:35:28.870752] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:07.945 [2024-07-11 21:35:28.878726] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:07.945 [2024-07-11 21:35:28.878875] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:07.945 [2024-07-11 21:35:28.886743] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:07.945 [2024-07-11 21:35:28.886919] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:08.238 [2024-07-11 21:35:28.898743] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:08.238 [2024-07-11 21:35:28.898961] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:08.238 [2024-07-11 21:35:28.900454] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:22:08.238 [2024-07-11 21:35:28.901230] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75006 ] 00:22:08.238 [2024-07-11 21:35:28.910736] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:08.238 [2024-07-11 21:35:28.910933] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:08.238 [2024-07-11 21:35:28.922732] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:08.238 [2024-07-11 21:35:28.922890] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:08.238 [2024-07-11 21:35:28.934740] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:08.238 [2024-07-11 21:35:28.934921] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:08.238 [2024-07-11 21:35:28.946760] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:08.239 [2024-07-11 21:35:28.947016] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:08.239 [2024-07-11 21:35:28.958784] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:08.239 [2024-07-11 21:35:28.958963] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:08.239 [2024-07-11 21:35:28.970758] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:08.239 [2024-07-11 21:35:28.970798] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:08.239 [2024-07-11 21:35:28.982755] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:08.239 [2024-07-11 21:35:28.982790] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:08.239 [2024-07-11 21:35:28.994757] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:08.239 [2024-07-11 21:35:28.994792] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:08.239 [2024-07-11 21:35:29.006755] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:08.239 [2024-07-11 21:35:29.006804] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:08.239 [2024-07-11 21:35:29.018776] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:08.239 [2024-07-11 21:35:29.018829] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:08.239 [2024-07-11 21:35:29.030842] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:08.239 [2024-07-11 21:35:29.030903] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:08.239 [2024-07-11 21:35:29.042804] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:08.239 [2024-07-11 21:35:29.042858] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:08.239 [2024-07-11 21:35:29.045477] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:08.239 [2024-07-11 21:35:29.050793] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:08.239 [2024-07-11 21:35:29.050832] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
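The long run of paired errors here is, by all appearances, intentional rather than a test failure: while the second bdevperf instance (pid 75006, 5 s of 50/50 randrw at queue depth 128 with 8 KiB I/O, started via --json /dev/fd/63 above) drives zero-copy traffic, the test keeps re-issuing nvmf_subsystem_add_ns for NSID 1, which is already attached. Each attempt appears to go through the subsystem pause path (the nvmf_rpc_ns_paused callback), fails with "Requested NSID 1 already in use", and the subsystem is resumed, exercising pause/resume under live I/O. A rough sketch of such a loop, inferred from the trace rather than copied from target/zcopy.sh:

  # Hypothetical shape of the stress loop implied by the repeated errors; the real
  # logic lives in test/nvmf/target/zcopy.sh and is not shown in this excerpt.
  for _ in $(seq 1 200); do
      # NSID 1 is already attached, so each call pauses the subsystem, logs
      # "Requested NSID 1 already in use", and resumes it while bdevperf I/O is in flight.
      rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
  done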
00:22:08.239 [2024-07-11 21:35:29.058800] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:08.239 [2024-07-11 21:35:29.058846] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:08.239 [2024-07-11 21:35:29.066842] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:08.239 [2024-07-11 21:35:29.066905] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:08.239 [2024-07-11 21:35:29.074795] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:08.239 [2024-07-11 21:35:29.074837] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:08.239 [2024-07-11 21:35:29.082789] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:08.239 [2024-07-11 21:35:29.082827] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:08.239 [2024-07-11 21:35:29.090796] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:08.239 [2024-07-11 21:35:29.090832] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:08.239 [2024-07-11 21:35:29.098795] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:08.239 [2024-07-11 21:35:29.098832] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:08.239 [2024-07-11 21:35:29.106818] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:08.239 [2024-07-11 21:35:29.106860] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:08.239 [2024-07-11 21:35:29.114805] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:08.239 [2024-07-11 21:35:29.114840] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:08.239 [2024-07-11 21:35:29.122800] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:08.239 [2024-07-11 21:35:29.122835] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:08.239 [2024-07-11 21:35:29.130815] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:08.239 [2024-07-11 21:35:29.130858] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:08.239 [2024-07-11 21:35:29.138832] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:08.239 [2024-07-11 21:35:29.138882] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:08.239 [2024-07-11 21:35:29.140501] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:08.239 [2024-07-11 21:35:29.146825] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:08.239 [2024-07-11 21:35:29.146872] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:08.239 [2024-07-11 21:35:29.158850] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:08.239 [2024-07-11 21:35:29.158916] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:08.239 [2024-07-11 21:35:29.166859] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:08.239 [2024-07-11 21:35:29.166917] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:08.239 [2024-07-11 21:35:29.174852] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:08.239 [2024-07-11 21:35:29.174907] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:08.239 [2024-07-11 21:35:29.182849] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:08.239 [2024-07-11 21:35:29.182899] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:08.497 [2024-07-11 21:35:29.190846] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:08.497 [2024-07-11 21:35:29.190895] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:08.497 [2024-07-11 21:35:29.198843] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:08.497 [2024-07-11 21:35:29.198887] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:08.497 [2024-07-11 21:35:29.206849] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:08.497 [2024-07-11 21:35:29.206888] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:08.497 [2024-07-11 21:35:29.214865] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:08.497 [2024-07-11 21:35:29.214910] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:08.497 [2024-07-11 21:35:29.222853] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:08.497 [2024-07-11 21:35:29.222898] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:08.497 [2024-07-11 21:35:29.230844] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:08.497 [2024-07-11 21:35:29.230880] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:08.497 [2024-07-11 21:35:29.238861] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:08.497 [2024-07-11 21:35:29.238902] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:08.497 [2024-07-11 21:35:29.246889] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:08.497 [2024-07-11 21:35:29.246944] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:08.497 [2024-07-11 21:35:29.254895] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:08.497 [2024-07-11 21:35:29.254942] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:08.497 [2024-07-11 21:35:29.262900] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:08.497 [2024-07-11 21:35:29.262941] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:08.497 [2024-07-11 21:35:29.270896] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:08.497 [2024-07-11 21:35:29.270937] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:08.497 [2024-07-11 21:35:29.278897] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:08.497 [2024-07-11 21:35:29.278936] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:08.497 [2024-07-11 21:35:29.286919] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:08.497 [2024-07-11 21:35:29.286970] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:08.497 [2024-07-11 21:35:29.294966] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:08.497 [2024-07-11 21:35:29.295009] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:08.497 [2024-07-11 21:35:29.302927] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:08.497 [2024-07-11 21:35:29.302966] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:08.497 Running I/O for 5 seconds... 00:22:08.497 [2024-07-11 21:35:29.310957] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:08.497 [2024-07-11 21:35:29.310996] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:08.497 [2024-07-11 21:35:29.325513] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:08.497 [2024-07-11 21:35:29.325558] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:08.497 [2024-07-11 21:35:29.336092] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:08.498 [2024-07-11 21:35:29.336133] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:08.498 [2024-07-11 21:35:29.347780] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:08.498 [2024-07-11 21:35:29.347822] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:08.498 [2024-07-11 21:35:29.359033] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:08.498 [2024-07-11 21:35:29.359071] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:08.498 [2024-07-11 21:35:29.374737] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:08.498 [2024-07-11 21:35:29.374804] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:08.498 [2024-07-11 21:35:29.392214] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:08.498 [2024-07-11 21:35:29.392268] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:08.498 [2024-07-11 21:35:29.402411] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:08.498 [2024-07-11 21:35:29.402456] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:08.498 [2024-07-11 21:35:29.414438] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:08.498 [2024-07-11 21:35:29.414499] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:08.498 [2024-07-11 21:35:29.426087] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:08.498 [2024-07-11 21:35:29.426129] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:08.498 [2024-07-11 21:35:29.441674] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:08.498 [2024-07-11 21:35:29.441719] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:08.756 [2024-07-11 21:35:29.457788] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:08.756 [2024-07-11 21:35:29.457831] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:08.756 [2024-07-11 21:35:29.467722] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:08.756 [2024-07-11 21:35:29.467763] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:08.756 [2024-07-11 21:35:29.481719] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:08.756 [2024-07-11 21:35:29.481764] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:08.756 [2024-07-11 21:35:29.493227] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:08.756 [2024-07-11 21:35:29.493270] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:08.756 [2024-07-11 21:35:29.504100] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:08.756 [2024-07-11 21:35:29.504141] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:08.756 [2024-07-11 21:35:29.520106] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:08.756 [2024-07-11 21:35:29.520148] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:08.756 [2024-07-11 21:35:29.530348] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:08.756 [2024-07-11 21:35:29.530393] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:08.756 [2024-07-11 21:35:29.542088] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:08.756 [2024-07-11 21:35:29.542133] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:08.756 [2024-07-11 21:35:29.553394] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:08.756 [2024-07-11 21:35:29.553437] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:08.756 [2024-07-11 21:35:29.571182] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:08.756 [2024-07-11 21:35:29.571237] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:08.756 [2024-07-11 21:35:29.581195] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:08.756 [2024-07-11 21:35:29.581238] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:08.756 [2024-07-11 21:35:29.593546] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:08.756 [2024-07-11 21:35:29.593589] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:08.756 [2024-07-11 21:35:29.604690] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:08.756 [2024-07-11 21:35:29.604732] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:08.756 [2024-07-11 21:35:29.621127] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:08.756 [2024-07-11 21:35:29.621175] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:08.756 [2024-07-11 21:35:29.638317] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:08.756 [2024-07-11 21:35:29.638364] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:08.756 [2024-07-11 21:35:29.654782] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:08.756 [2024-07-11 21:35:29.654831] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:08.756 [2024-07-11 21:35:29.664249] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:08.756 [2024-07-11 21:35:29.664291] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:08.756 [2024-07-11 21:35:29.680102] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:08.756 [2024-07-11 21:35:29.680150] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:08.756 [2024-07-11 21:35:29.690853] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:08.756 [2024-07-11 21:35:29.690900] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:08.756 [2024-07-11 21:35:29.704868] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:08.756 [2024-07-11 21:35:29.704912] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:09.014 [2024-07-11 21:35:29.720886] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:09.014 [2024-07-11 21:35:29.720938] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:09.014 [2024-07-11 21:35:29.730720] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:09.014 [2024-07-11 21:35:29.730764] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:09.014 [2024-07-11 21:35:29.742648] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:09.014 [2024-07-11 21:35:29.742690] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:09.014 [2024-07-11 21:35:29.753745] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:09.014 [2024-07-11 21:35:29.753793] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:09.014 [2024-07-11 21:35:29.764667] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:09.014 [2024-07-11 21:35:29.764708] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:09.014 [2024-07-11 21:35:29.775750] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:09.014 [2024-07-11 21:35:29.775793] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:09.014 [2024-07-11 21:35:29.792213] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:09.014 [2024-07-11 21:35:29.792275] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:09.014 [2024-07-11 21:35:29.802709] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:09.014 [2024-07-11 21:35:29.802761] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:09.014 [2024-07-11 21:35:29.814721] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:09.014 [2024-07-11 21:35:29.814779] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:09.014 [2024-07-11 21:35:29.825764] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:09.014 [2024-07-11 21:35:29.825810] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:09.014 [2024-07-11 21:35:29.837155] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:09.014 [2024-07-11 21:35:29.837200] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:09.014 [2024-07-11 21:35:29.848351] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:09.014 [2024-07-11 21:35:29.848414] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:09.014 [2024-07-11 21:35:29.859949] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:09.014 [2024-07-11 21:35:29.859998] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:09.014 [2024-07-11 21:35:29.871230] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:09.014 [2024-07-11 21:35:29.871283] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:09.014 [2024-07-11 21:35:29.887999] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:09.014 [2024-07-11 21:35:29.888062] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:09.014 [2024-07-11 21:35:29.904022] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:09.014 [2024-07-11 21:35:29.904092] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:09.014 [2024-07-11 21:35:29.921888] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:09.014 [2024-07-11 21:35:29.921947] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:09.014 [2024-07-11 21:35:29.938168] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:09.014 [2024-07-11 21:35:29.938235] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:09.014 [2024-07-11 21:35:29.958334] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:09.014 [2024-07-11 21:35:29.958395] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:09.271 [2024-07-11 21:35:29.973570] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:09.271 [2024-07-11 21:35:29.973620] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:09.271 [2024-07-11 21:35:29.989916] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:09.271 [2024-07-11 21:35:29.989980] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:09.271 [2024-07-11 21:35:30.008141] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:09.271 [2024-07-11 21:35:30.008209] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:09.271 [2024-07-11 21:35:30.022895] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:09.271 [2024-07-11 21:35:30.022962] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:09.271 [2024-07-11 21:35:30.038520] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:09.271 [2024-07-11 21:35:30.038582] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:09.271 [2024-07-11 21:35:30.048314] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:09.271 [2024-07-11 21:35:30.048363] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:09.271 [2024-07-11 21:35:30.060309] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:09.271 [2024-07-11 21:35:30.060362] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:09.271 [2024-07-11 21:35:30.071218] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:09.271 [2024-07-11 21:35:30.071268] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:09.271 [2024-07-11 21:35:30.082643] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:09.271 [2024-07-11 21:35:30.082687] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:09.271 [2024-07-11 21:35:30.095415] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:09.271 [2024-07-11 21:35:30.095464] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:09.271 [2024-07-11 21:35:30.105700] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:09.271 [2024-07-11 21:35:30.105747] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:09.272 [2024-07-11 21:35:30.121537] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:09.272 [2024-07-11 21:35:30.121607] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:09.272 [2024-07-11 21:35:30.131985] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:09.272 [2024-07-11 21:35:30.132031] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:09.272 [2024-07-11 21:35:30.143959] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:09.272 [2024-07-11 21:35:30.144002] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:09.272 [2024-07-11 21:35:30.155530] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:09.272 [2024-07-11 21:35:30.155579] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:09.272 [2024-07-11 21:35:30.166996] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:09.272 [2024-07-11 21:35:30.167044] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:09.272 [2024-07-11 21:35:30.179703] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:09.272 [2024-07-11 21:35:30.179753] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:09.272 [2024-07-11 21:35:30.189076] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:09.272 [2024-07-11 21:35:30.189127] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:09.272 [2024-07-11 21:35:30.205517] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:09.272 [2024-07-11 21:35:30.205569] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:09.272 [2024-07-11 21:35:30.216182] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:09.272 [2024-07-11 21:35:30.216223] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:09.529 [2024-07-11 21:35:30.228247] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:09.529 [2024-07-11 21:35:30.228309] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:09.529 [2024-07-11 21:35:30.243437] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:09.529 [2024-07-11 21:35:30.243505] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:09.529 [2024-07-11 21:35:30.253846] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:09.529 [2024-07-11 21:35:30.253898] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:09.529 [2024-07-11 21:35:30.265848] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:09.529 [2024-07-11 21:35:30.265893] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:09.529 [2024-07-11 21:35:30.276740] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:09.529 [2024-07-11 21:35:30.276790] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:09.529 [2024-07-11 21:35:30.292028] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:09.529 [2024-07-11 21:35:30.292074] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:09.529 [2024-07-11 21:35:30.302646] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:09.529 [2024-07-11 21:35:30.302692] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:09.529 [2024-07-11 21:35:30.314368] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:09.529 [2024-07-11 21:35:30.314413] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:09.529 [2024-07-11 21:35:30.325170] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:09.529 [2024-07-11 21:35:30.325217] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:09.529 [2024-07-11 21:35:30.340068] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:09.529 [2024-07-11 21:35:30.340114] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:09.529 [2024-07-11 21:35:30.356470] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:09.529 [2024-07-11 21:35:30.356543] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:09.529 [2024-07-11 21:35:30.375304] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:09.529 [2024-07-11 21:35:30.375357] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:09.529 [2024-07-11 21:35:30.390659] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:09.529 [2024-07-11 21:35:30.390713] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:09.529 [2024-07-11 21:35:30.410203] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:09.529 [2024-07-11 21:35:30.410263] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:09.529 [2024-07-11 21:35:30.421843] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:09.529 [2024-07-11 21:35:30.421903] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:09.529 [2024-07-11 21:35:30.435294] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:09.529 [2024-07-11 21:35:30.435337] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:09.529 [2024-07-11 21:35:30.445045] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:09.529 [2024-07-11 21:35:30.445087] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:09.529 [2024-07-11 21:35:30.457169] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:09.529 [2024-07-11 21:35:30.457206] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:09.529 [2024-07-11 21:35:30.472430] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:09.529 [2024-07-11 21:35:30.472504] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:09.786 [2024-07-11 21:35:30.489056] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:09.786 [2024-07-11 21:35:30.489109] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:09.786 [2024-07-11 21:35:30.504859] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:09.786 [2024-07-11 21:35:30.504914] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:09.786 [2024-07-11 21:35:30.514865] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:09.786 [2024-07-11 21:35:30.514913] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:09.786 [2024-07-11 21:35:30.530030] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:09.786 [2024-07-11 21:35:30.530075] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:09.786 [2024-07-11 21:35:30.540557] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:09.786 [2024-07-11 21:35:30.540602] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:09.786 [2024-07-11 21:35:30.555629] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:09.786 [2024-07-11 21:35:30.555668] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:09.786 [2024-07-11 21:35:30.573404] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:09.786 [2024-07-11 21:35:30.573448] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:09.786 [2024-07-11 21:35:30.587870] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:09.786 [2024-07-11 21:35:30.587912] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:09.786 [2024-07-11 21:35:30.603566] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:09.786 [2024-07-11 21:35:30.603605] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:09.786 [2024-07-11 21:35:30.622789] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:09.786 [2024-07-11 21:35:30.622837] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:09.786 [2024-07-11 21:35:30.637584] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:09.786 [2024-07-11 21:35:30.637629] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:09.786 [2024-07-11 21:35:30.653198] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:09.786 [2024-07-11 21:35:30.653251] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:09.786 [2024-07-11 21:35:30.672011] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:09.786 [2024-07-11 21:35:30.672060] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:09.786 [2024-07-11 21:35:30.686981] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:09.786 [2024-07-11 21:35:30.687025] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:09.786 [2024-07-11 21:35:30.697281] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:09.786 [2024-07-11 21:35:30.697324] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:09.786 [2024-07-11 21:35:30.712536] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:09.786 [2024-07-11 21:35:30.712581] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:09.786 [2024-07-11 21:35:30.728422] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:09.786 [2024-07-11 21:35:30.728474] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:10.044 [2024-07-11 21:35:30.737882] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:10.044 [2024-07-11 21:35:30.737924] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:10.044 [2024-07-11 21:35:30.753978] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:10.044 [2024-07-11 21:35:30.754022] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:10.044 [2024-07-11 21:35:30.763996] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:10.044 [2024-07-11 21:35:30.764038] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:10.044 [2024-07-11 21:35:30.779655] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:10.044 [2024-07-11 21:35:30.779703] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:10.044 [2024-07-11 21:35:30.796818] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:10.044 [2024-07-11 21:35:30.796864] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:10.044 [2024-07-11 21:35:30.811763] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:10.044 [2024-07-11 21:35:30.811813] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:10.044 [2024-07-11 21:35:30.827795] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:10.044 [2024-07-11 21:35:30.827853] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:10.044 [2024-07-11 21:35:30.846149] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:10.044 [2024-07-11 21:35:30.846196] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:10.044 [2024-07-11 21:35:30.860829] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:10.044 [2024-07-11 21:35:30.860878] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:10.044 [2024-07-11 21:35:30.870756] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:10.044 [2024-07-11 21:35:30.870805] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:10.044 [2024-07-11 21:35:30.882664] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:10.044 [2024-07-11 21:35:30.882711] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:10.044 [2024-07-11 21:35:30.893934] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:10.044 [2024-07-11 21:35:30.893985] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:10.044 [2024-07-11 21:35:30.905117] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:10.044 [2024-07-11 21:35:30.905162] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:10.044 [2024-07-11 21:35:30.916296] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:10.044 [2024-07-11 21:35:30.916349] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:10.044 [2024-07-11 21:35:30.927098] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:10.044 [2024-07-11 21:35:30.927149] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:10.044 [2024-07-11 21:35:30.938570] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:10.044 [2024-07-11 21:35:30.938617] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:10.044 [2024-07-11 21:35:30.949572] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:10.044 [2024-07-11 21:35:30.949615] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:10.044 [2024-07-11 21:35:30.960291] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:10.044 [2024-07-11 21:35:30.960343] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:10.044 [2024-07-11 21:35:30.971969] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:10.044 [2024-07-11 21:35:30.972018] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:10.044 [2024-07-11 21:35:30.983313] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:10.044 [2024-07-11 21:35:30.983356] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:10.303 [2024-07-11 21:35:30.994683] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:10.303 [2024-07-11 21:35:30.994727] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:10.303 [2024-07-11 21:35:31.005609] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:10.303 [2024-07-11 21:35:31.005651] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:10.303 [2024-07-11 21:35:31.016889] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:10.303 [2024-07-11 21:35:31.016935] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:10.303 [2024-07-11 21:35:31.028268] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:10.303 [2024-07-11 21:35:31.028313] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:10.303 [2024-07-11 21:35:31.039512] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:10.303 [2024-07-11 21:35:31.039552] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:10.303 [2024-07-11 21:35:31.051029] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:10.303 [2024-07-11 21:35:31.051079] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:10.303 [2024-07-11 21:35:31.062736] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:10.303 [2024-07-11 21:35:31.062797] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:10.303 [2024-07-11 21:35:31.074194] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:10.303 [2024-07-11 21:35:31.074250] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:10.303 [2024-07-11 21:35:31.085611] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:10.303 [2024-07-11 21:35:31.085677] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:10.303 [2024-07-11 21:35:31.101012] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:10.303 [2024-07-11 21:35:31.101079] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:10.303 [2024-07-11 21:35:31.111782] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:10.303 [2024-07-11 21:35:31.111836] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:10.303 [2024-07-11 21:35:31.123220] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:10.303 [2024-07-11 21:35:31.123275] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:10.303 [2024-07-11 21:35:31.134378] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:10.303 [2024-07-11 21:35:31.134423] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:10.303 [2024-07-11 21:35:31.146033] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:10.303 [2024-07-11 21:35:31.146077] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:10.303 [2024-07-11 21:35:31.157090] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:10.303 [2024-07-11 21:35:31.157139] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:10.303 [2024-07-11 21:35:31.167951] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:10.303 [2024-07-11 21:35:31.167998] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:10.303 [2024-07-11 21:35:31.179559] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:10.303 [2024-07-11 21:35:31.179610] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:22:10.303 [2024-07-11 21:35:31.190635] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:22:10.303 [2024-07-11 21:35:31.190693] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
(the same subsystem.c:1793 / nvmf_rpc.c:1513 error pair repeats at intervals of roughly 10-20 ms from 21:35:31.190635 through 21:35:34.239682)
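The repeated pair above is what the target logs when an nvmf_subsystem_add_ns request asks for an NSID that is still registered, which appears to be what the zcopy test is exercising here. As a rough illustration only - the malloc bdev name, its size, and the SPDK checkout path are assumptions, while the subsystem NQN and the -n flag are taken from the log - the same error can be provoked by hand with the JSON-RPC client:

  SPDK_DIR=/home/vagrant/spdk_repo/spdk
  rpc="$SPDK_DIR/scripts/rpc.py"
  "$rpc" bdev_malloc_create -b malloc0 64 512                            # hypothetical backing bdev (64 MB, 512 B blocks)
  "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # first add of NSID 1 succeeds
  "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # second add fails: "Requested NSID 1 already in use"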
00:22:13.467 [2024-07-11 21:35:34.249281] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:22:13.467 [2024-07-11 21:35:34.249330] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
(the pair recurs at 21:35:34.265637, .282472, .298602, .308593 and .319058 before the I/O statistics are printed)
00:22:13.467
00:22:13.467 Latency(us)
00:22:13.467 Device Information                                                       : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:22:13.467 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:22:13.467 Nvme1n1                                                                  :       5.01   11186.51      87.39      0.00     0.00   11427.03    4825.83   23831.27
00:22:13.467 ===================================================================================================================
00:22:13.467 Total                                                                    :            11186.51      87.39      0.00     0.00   11427.03    4825.83   23831.27
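As a quick sanity check, the MiB/s column in the summary above is consistent with the IOPS column and the job's 8192-byte I/O size; for example:

  echo "scale=2; 11186.51 * 8192 / 1048576" | bc   # -> 87.39 MiB/s, matching the table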
00:22:13.467 [2024-07-11 21:35:34.323522] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:22:13.467 [2024-07-11 21:35:34.323556] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
(the pair continues at roughly 8 ms intervals from 21:35:34.331521 through 21:35:34.475622)
00:22:13.725 [2024-07-11 21:35:34.483569] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:22:13.725 [2024-07-11 21:35:34.483609] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
(the last iterations log the same pair again at 21:35:34.491576, .499582, .507577, .515561 and .531580)
00:22:13.726 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (75006) - No such process
00:22:13.726 21:35:34 -- target/zcopy.sh@49 -- # wait 75006
00:22:13.726 21:35:34 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:22:13.726 21:35:34 -- common/autotest_common.sh@551 -- # xtrace_disable
00:22:13.726 21:35:34 -- common/autotest_common.sh@10 -- # set +x
00:22:13.726 21:35:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:22:13.726 21:35:34 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:22:13.726 21:35:34 -- common/autotest_common.sh@551 -- # xtrace_disable
00:22:13.726 21:35:34 -- common/autotest_common.sh@10 -- # set +x
00:22:13.726 delay0
00:22:13.726 21:35:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:22:13.726 21:35:34 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:22:13.726 21:35:34 -- common/autotest_common.sh@551 -- # xtrace_disable
00:22:13.726 21:35:34 -- common/autotest_common.sh@10 -- # set +x
00:22:13.726 21:35:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:22:13.726 21:35:34 -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:22:13.983 [2024-07-11 21:35:34.727812] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:22:20.574 Initializing NVMe Controllers
00:22:20.574 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:22:20.574 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:22:20.574 Initialization complete. Launching workers.
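The rpc_cmd lines above appear to be the test harness's shorthand for SPDK's JSON-RPC client; outside the harness, the same teardown and delay-bdev re-plumbing would look roughly like the sketch below (the rpc.py path is assumed from this workspace, and the command arguments are copied from the log rather than verified independently):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1                                  # drop NSID 1 from the subsystem
  "$rpc" bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000     # delay bdev layered on malloc0
  "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1                           # re-expose it as NSID 1
  /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'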
00:22:20.574 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 89 00:22:20.574 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 376, failed to submit 33 00:22:20.574 success 248, unsuccess 128, failed 0 00:22:20.574 21:35:40 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:22:20.574 21:35:40 -- target/zcopy.sh@60 -- # nvmftestfini 00:22:20.574 21:35:40 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:20.574 21:35:40 -- nvmf/common.sh@116 -- # sync 00:22:20.574 21:35:40 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:20.574 21:35:40 -- nvmf/common.sh@119 -- # set +e 00:22:20.574 21:35:40 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:20.574 21:35:40 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:20.574 rmmod nvme_tcp 00:22:20.574 rmmod nvme_fabrics 00:22:20.574 rmmod nvme_keyring 00:22:20.574 21:35:40 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:20.574 21:35:40 -- nvmf/common.sh@123 -- # set -e 00:22:20.574 21:35:40 -- nvmf/common.sh@124 -- # return 0 00:22:20.574 21:35:40 -- nvmf/common.sh@477 -- # '[' -n 74851 ']' 00:22:20.574 21:35:40 -- nvmf/common.sh@478 -- # killprocess 74851 00:22:20.574 21:35:40 -- common/autotest_common.sh@926 -- # '[' -z 74851 ']' 00:22:20.574 21:35:40 -- common/autotest_common.sh@930 -- # kill -0 74851 00:22:20.574 21:35:40 -- common/autotest_common.sh@931 -- # uname 00:22:20.574 21:35:40 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:20.574 21:35:40 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 74851 00:22:20.574 21:35:40 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:22:20.574 21:35:40 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:22:20.574 killing process with pid 74851 00:22:20.574 21:35:40 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 74851' 00:22:20.574 21:35:40 -- common/autotest_common.sh@945 -- # kill 74851 00:22:20.574 21:35:40 -- common/autotest_common.sh@950 -- # wait 74851 00:22:20.574 21:35:41 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:20.574 21:35:41 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:20.574 21:35:41 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:20.574 21:35:41 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:20.574 21:35:41 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:20.574 21:35:41 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:20.574 21:35:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:20.574 21:35:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:20.574 21:35:41 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:22:20.574 00:22:20.574 real 0m24.527s 00:22:20.574 user 0m39.721s 00:22:20.574 sys 0m7.020s 00:22:20.574 21:35:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:20.574 21:35:41 -- common/autotest_common.sh@10 -- # set +x 00:22:20.574 ************************************ 00:22:20.574 END TEST nvmf_zcopy 00:22:20.574 ************************************ 00:22:20.574 21:35:41 -- nvmf/nvmf.sh@53 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:22:20.574 21:35:41 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:22:20.574 21:35:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:20.574 21:35:41 -- common/autotest_common.sh@10 -- # set +x 00:22:20.574 ************************************ 00:22:20.574 START TEST nvmf_nmic 
00:22:20.574 ************************************ 00:22:20.574 21:35:41 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:22:20.574 * Looking for test storage... 00:22:20.574 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:22:20.574 21:35:41 -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:20.574 21:35:41 -- nvmf/common.sh@7 -- # uname -s 00:22:20.574 21:35:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:20.574 21:35:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:20.574 21:35:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:20.574 21:35:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:20.574 21:35:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:20.574 21:35:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:20.574 21:35:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:20.574 21:35:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:20.574 21:35:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:20.574 21:35:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:20.574 21:35:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:65f0dc09-2f81-4c7b-a413-2a2a000e2750 00:22:20.574 21:35:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=65f0dc09-2f81-4c7b-a413-2a2a000e2750 00:22:20.574 21:35:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:20.574 21:35:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:20.574 21:35:41 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:20.574 21:35:41 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:20.574 21:35:41 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:20.574 21:35:41 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:20.574 21:35:41 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:20.574 21:35:41 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:20.574 21:35:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:20.574 21:35:41 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:20.574 21:35:41 -- paths/export.sh@5 -- # export PATH 00:22:20.574 21:35:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:20.574 21:35:41 -- nvmf/common.sh@46 -- # : 0 00:22:20.574 21:35:41 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:20.574 21:35:41 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:20.574 21:35:41 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:20.574 21:35:41 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:20.574 21:35:41 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:20.574 21:35:41 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:20.574 21:35:41 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:20.574 21:35:41 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:20.574 21:35:41 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:20.574 21:35:41 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:20.574 21:35:41 -- target/nmic.sh@14 -- # nvmftestinit 00:22:20.574 21:35:41 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:20.574 21:35:41 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:20.574 21:35:41 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:20.574 21:35:41 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:20.574 21:35:41 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:20.574 21:35:41 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:20.574 21:35:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:20.574 21:35:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:20.574 21:35:41 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:22:20.574 21:35:41 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:22:20.574 21:35:41 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:22:20.574 21:35:41 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:22:20.574 21:35:41 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:22:20.574 21:35:41 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:22:20.574 21:35:41 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:20.574 21:35:41 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:20.574 21:35:41 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:20.574 21:35:41 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:22:20.574 21:35:41 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:20.574 21:35:41 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:20.574 21:35:41 -- nvmf/common.sh@146 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:20.574 21:35:41 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:20.574 21:35:41 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:20.574 21:35:41 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:20.574 21:35:41 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:20.574 21:35:41 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:20.574 21:35:41 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:22:20.574 21:35:41 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:22:20.574 Cannot find device "nvmf_tgt_br" 00:22:20.575 21:35:41 -- nvmf/common.sh@154 -- # true 00:22:20.575 21:35:41 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:22:20.575 Cannot find device "nvmf_tgt_br2" 00:22:20.575 21:35:41 -- nvmf/common.sh@155 -- # true 00:22:20.575 21:35:41 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:22:20.575 21:35:41 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:22:20.575 Cannot find device "nvmf_tgt_br" 00:22:20.575 21:35:41 -- nvmf/common.sh@157 -- # true 00:22:20.575 21:35:41 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:22:20.575 Cannot find device "nvmf_tgt_br2" 00:22:20.575 21:35:41 -- nvmf/common.sh@158 -- # true 00:22:20.575 21:35:41 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:22:20.575 21:35:41 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:22:20.575 21:35:41 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:20.575 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:20.575 21:35:41 -- nvmf/common.sh@161 -- # true 00:22:20.575 21:35:41 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:20.575 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:20.575 21:35:41 -- nvmf/common.sh@162 -- # true 00:22:20.575 21:35:41 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:22:20.575 21:35:41 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:20.575 21:35:41 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:20.575 21:35:41 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:20.575 21:35:41 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:20.575 21:35:41 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:20.575 21:35:41 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:20.575 21:35:41 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:20.835 21:35:41 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:20.835 21:35:41 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:22:20.835 21:35:41 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:22:20.835 21:35:41 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:22:20.836 21:35:41 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:22:20.836 21:35:41 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:20.836 21:35:41 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:20.836 21:35:41 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:22:20.836 21:35:41 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:22:20.836 21:35:41 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:22:20.836 21:35:41 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:22:20.836 21:35:41 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:20.836 21:35:41 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:20.836 21:35:41 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:20.836 21:35:41 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:20.836 21:35:41 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:22:20.836 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:20.836 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.089 ms 00:22:20.836 00:22:20.836 --- 10.0.0.2 ping statistics --- 00:22:20.836 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:20.836 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:22:20.836 21:35:41 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:22:20.836 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:20.836 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:22:20.836 00:22:20.836 --- 10.0.0.3 ping statistics --- 00:22:20.836 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:20.836 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:22:20.836 21:35:41 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:20.836 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:20.836 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:22:20.836 00:22:20.836 --- 10.0.0.1 ping statistics --- 00:22:20.836 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:20.836 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:22:20.836 21:35:41 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:20.836 21:35:41 -- nvmf/common.sh@421 -- # return 0 00:22:20.836 21:35:41 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:20.836 21:35:41 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:20.836 21:35:41 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:20.836 21:35:41 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:20.836 21:35:41 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:20.836 21:35:41 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:20.836 21:35:41 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:20.836 21:35:41 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:22:20.836 21:35:41 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:20.836 21:35:41 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:20.836 21:35:41 -- common/autotest_common.sh@10 -- # set +x 00:22:20.836 21:35:41 -- nvmf/common.sh@469 -- # nvmfpid=75323 00:22:20.836 21:35:41 -- nvmf/common.sh@470 -- # waitforlisten 75323 00:22:20.836 21:35:41 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:20.836 21:35:41 -- common/autotest_common.sh@819 -- # '[' -z 75323 ']' 00:22:20.836 21:35:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:20.836 21:35:41 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:20.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:20.836 21:35:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:20.836 21:35:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:20.836 21:35:41 -- common/autotest_common.sh@10 -- # set +x 00:22:20.836 [2024-07-11 21:35:41.736352] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:22:20.836 [2024-07-11 21:35:41.736456] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:21.102 [2024-07-11 21:35:41.878372] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:21.102 [2024-07-11 21:35:41.983265] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:21.102 [2024-07-11 21:35:41.983446] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:21.102 [2024-07-11 21:35:41.983462] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:21.102 [2024-07-11 21:35:41.983474] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:21.102 [2024-07-11 21:35:41.983617] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:21.102 [2024-07-11 21:35:41.983799] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:21.102 [2024-07-11 21:35:41.983869] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:21.102 [2024-07-11 21:35:41.983876] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:22.037 21:35:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:22.037 21:35:42 -- common/autotest_common.sh@852 -- # return 0 00:22:22.037 21:35:42 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:22.037 21:35:42 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:22.037 21:35:42 -- common/autotest_common.sh@10 -- # set +x 00:22:22.037 21:35:42 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:22.037 21:35:42 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:22.037 21:35:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:22.037 21:35:42 -- common/autotest_common.sh@10 -- # set +x 00:22:22.037 [2024-07-11 21:35:42.817298] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:22.037 21:35:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:22.037 21:35:42 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:22.037 21:35:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:22.037 21:35:42 -- common/autotest_common.sh@10 -- # set +x 00:22:22.037 Malloc0 00:22:22.037 21:35:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:22.037 21:35:42 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:22:22.037 21:35:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:22.037 21:35:42 -- common/autotest_common.sh@10 -- # set +x 00:22:22.037 21:35:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:22.037 21:35:42 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:22.037 21:35:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:22.037 21:35:42 
-- common/autotest_common.sh@10 -- # set +x 00:22:22.037 21:35:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:22.037 21:35:42 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:22.037 21:35:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:22.037 21:35:42 -- common/autotest_common.sh@10 -- # set +x 00:22:22.037 [2024-07-11 21:35:42.887325] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:22.037 21:35:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:22.037 21:35:42 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:22:22.037 test case1: single bdev can't be used in multiple subsystems 00:22:22.037 21:35:42 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:22:22.037 21:35:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:22.037 21:35:42 -- common/autotest_common.sh@10 -- # set +x 00:22:22.037 21:35:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:22.037 21:35:42 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:22:22.037 21:35:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:22.037 21:35:42 -- common/autotest_common.sh@10 -- # set +x 00:22:22.037 21:35:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:22.037 21:35:42 -- target/nmic.sh@28 -- # nmic_status=0 00:22:22.037 21:35:42 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:22:22.037 21:35:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:22.037 21:35:42 -- common/autotest_common.sh@10 -- # set +x 00:22:22.037 [2024-07-11 21:35:42.915232] bdev.c:7940:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:22:22.037 [2024-07-11 21:35:42.915295] subsystem.c:1819:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:22:22.037 [2024-07-11 21:35:42.915308] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:22.037 request: 00:22:22.037 { 00:22:22.037 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:22:22.037 "namespace": { 00:22:22.037 "bdev_name": "Malloc0" 00:22:22.037 }, 00:22:22.037 "method": "nvmf_subsystem_add_ns", 00:22:22.037 "req_id": 1 00:22:22.037 } 00:22:22.037 Got JSON-RPC error response 00:22:22.037 response: 00:22:22.037 { 00:22:22.037 "code": -32602, 00:22:22.037 "message": "Invalid parameters" 00:22:22.037 } 00:22:22.037 21:35:42 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:22:22.037 21:35:42 -- target/nmic.sh@29 -- # nmic_status=1 00:22:22.037 21:35:42 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:22:22.037 Adding namespace failed - expected result. 00:22:22.037 21:35:42 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 
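Note on the expected failure above: nvmf_subsystem_add_ns takes an exclusive_write claim on the bdev, so once Malloc0 is a namespace of nqn.2016-06.io.spdk:cnode1 a second subsystem cannot add it, and the RPC is rejected with JSON-RPC error -32602. A minimal sketch of the same sequence outside the test harness, assuming a running nvmf_tgt and the default rpc.py socket (/var/tmp/spdk.sock):
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # create the shared bdev and the first subsystem, which claims Malloc0
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  # a second subsystem cannot reuse the claimed bdev; this call fails as shown above
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0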
00:22:22.037 test case2: host connect to nvmf target in multiple paths 00:22:22.037 21:35:42 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:22:22.037 21:35:42 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:22.037 21:35:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:22.037 21:35:42 -- common/autotest_common.sh@10 -- # set +x 00:22:22.037 [2024-07-11 21:35:42.927401] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:22.037 21:35:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:22.037 21:35:42 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:65f0dc09-2f81-4c7b-a413-2a2a000e2750 --hostid=65f0dc09-2f81-4c7b-a413-2a2a000e2750 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:22:22.295 21:35:43 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:65f0dc09-2f81-4c7b-a413-2a2a000e2750 --hostid=65f0dc09-2f81-4c7b-a413-2a2a000e2750 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:22:22.295 21:35:43 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:22:22.295 21:35:43 -- common/autotest_common.sh@1177 -- # local i=0 00:22:22.295 21:35:43 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:22.295 21:35:43 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:22.295 21:35:43 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:24.835 21:35:45 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:24.835 21:35:45 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:24.835 21:35:45 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:22:24.835 21:35:45 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:24.835 21:35:45 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:24.835 21:35:45 -- common/autotest_common.sh@1187 -- # return 0 00:22:24.835 21:35:45 -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:22:24.835 [global] 00:22:24.835 thread=1 00:22:24.835 invalidate=1 00:22:24.835 rw=write 00:22:24.835 time_based=1 00:22:24.835 runtime=1 00:22:24.835 ioengine=libaio 00:22:24.835 direct=1 00:22:24.835 bs=4096 00:22:24.835 iodepth=1 00:22:24.835 norandommap=0 00:22:24.835 numjobs=1 00:22:24.835 00:22:24.835 verify_dump=1 00:22:24.835 verify_backlog=512 00:22:24.835 verify_state_save=0 00:22:24.835 do_verify=1 00:22:24.835 verify=crc32c-intel 00:22:24.835 [job0] 00:22:24.835 filename=/dev/nvme0n1 00:22:24.835 Could not set queue depth (nvme0n1) 00:22:24.835 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:22:24.835 fio-3.35 00:22:24.835 Starting 1 thread 00:22:25.785 00:22:25.785 job0: (groupid=0, jobs=1): err= 0: pid=75417: Thu Jul 11 21:35:46 2024 00:22:25.785 read: IOPS=2961, BW=11.6MiB/s (12.1MB/s)(11.6MiB/1001msec) 00:22:25.785 slat (usec): min=12, max=670, avg=17.61, stdev=13.38 00:22:25.785 clat (usec): min=132, max=476, avg=178.35, stdev=19.09 00:22:25.785 lat (usec): min=155, max=871, avg=195.96, stdev=24.18 00:22:25.785 clat percentiles (usec): 00:22:25.785 | 1.00th=[ 147], 5.00th=[ 155], 10.00th=[ 159], 20.00th=[ 163], 00:22:25.785 | 30.00th=[ 169], 40.00th=[ 174], 50.00th=[ 178], 60.00th=[ 182], 00:22:25.785 | 70.00th=[ 186], 80.00th=[ 190], 90.00th=[ 198], 95.00th=[ 
206], 00:22:25.785 | 99.00th=[ 229], 99.50th=[ 237], 99.90th=[ 367], 99.95th=[ 474], 00:22:25.785 | 99.99th=[ 478] 00:22:25.785 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:22:25.785 slat (usec): min=17, max=161, avg=23.16, stdev= 6.09 00:22:25.785 clat (usec): min=86, max=554, avg=109.27, stdev=14.97 00:22:25.785 lat (usec): min=106, max=578, avg=132.43, stdev=17.43 00:22:25.785 clat percentiles (usec): 00:22:25.785 | 1.00th=[ 90], 5.00th=[ 93], 10.00th=[ 96], 20.00th=[ 100], 00:22:25.785 | 30.00th=[ 102], 40.00th=[ 105], 50.00th=[ 109], 60.00th=[ 111], 00:22:25.785 | 70.00th=[ 114], 80.00th=[ 117], 90.00th=[ 124], 95.00th=[ 133], 00:22:25.785 | 99.00th=[ 149], 99.50th=[ 155], 99.90th=[ 202], 99.95th=[ 314], 00:22:25.785 | 99.99th=[ 553] 00:22:25.785 bw ( KiB/s): min=12288, max=12288, per=100.00%, avg=12288.00, stdev= 0.00, samples=1 00:22:25.785 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:22:25.785 lat (usec) : 100=11.05%, 250=88.75%, 500=0.18%, 750=0.02% 00:22:25.785 cpu : usr=2.50%, sys=9.50%, ctx=6036, majf=0, minf=2 00:22:25.785 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:25.785 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:25.785 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:25.785 issued rwts: total=2964,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:25.785 latency : target=0, window=0, percentile=100.00%, depth=1 00:22:25.785 00:22:25.785 Run status group 0 (all jobs): 00:22:25.785 READ: bw=11.6MiB/s (12.1MB/s), 11.6MiB/s-11.6MiB/s (12.1MB/s-12.1MB/s), io=11.6MiB (12.1MB), run=1001-1001msec 00:22:25.785 WRITE: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec 00:22:25.785 00:22:25.785 Disk stats (read/write): 00:22:25.785 nvme0n1: ios=2610/2890, merge=0/0, ticks=508/341, in_queue=849, util=91.38% 00:22:25.785 21:35:46 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:22:25.785 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:22:25.785 21:35:46 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:22:25.785 21:35:46 -- common/autotest_common.sh@1198 -- # local i=0 00:22:25.785 21:35:46 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:22:25.785 21:35:46 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:25.785 21:35:46 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:25.785 21:35:46 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:22:25.785 21:35:46 -- common/autotest_common.sh@1210 -- # return 0 00:22:25.785 21:35:46 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:22:25.785 21:35:46 -- target/nmic.sh@53 -- # nvmftestfini 00:22:25.785 21:35:46 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:25.785 21:35:46 -- nvmf/common.sh@116 -- # sync 00:22:25.785 21:35:46 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:25.785 21:35:46 -- nvmf/common.sh@119 -- # set +e 00:22:25.785 21:35:46 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:25.785 21:35:46 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:25.785 rmmod nvme_tcp 00:22:25.785 rmmod nvme_fabrics 00:22:25.785 rmmod nvme_keyring 00:22:25.785 21:35:46 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:25.785 21:35:46 -- nvmf/common.sh@123 -- # set -e 00:22:25.785 21:35:46 -- nvmf/common.sh@124 -- # return 0 00:22:25.785 21:35:46 -- nvmf/common.sh@477 -- # '[' -n 
75323 ']' 00:22:25.785 21:35:46 -- nvmf/common.sh@478 -- # killprocess 75323 00:22:25.785 21:35:46 -- common/autotest_common.sh@926 -- # '[' -z 75323 ']' 00:22:25.785 21:35:46 -- common/autotest_common.sh@930 -- # kill -0 75323 00:22:25.785 21:35:46 -- common/autotest_common.sh@931 -- # uname 00:22:25.785 21:35:46 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:25.785 21:35:46 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 75323 00:22:26.044 21:35:46 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:22:26.044 killing process with pid 75323 00:22:26.044 21:35:46 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:22:26.044 21:35:46 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 75323' 00:22:26.044 21:35:46 -- common/autotest_common.sh@945 -- # kill 75323 00:22:26.044 21:35:46 -- common/autotest_common.sh@950 -- # wait 75323 00:22:26.304 21:35:46 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:26.304 21:35:46 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:26.304 21:35:46 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:26.304 21:35:46 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:26.304 21:35:46 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:26.304 21:35:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:26.304 21:35:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:26.304 21:35:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:26.304 21:35:47 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:22:26.304 00:22:26.304 real 0m5.806s 00:22:26.304 user 0m19.026s 00:22:26.304 sys 0m2.033s 00:22:26.304 21:35:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:26.304 21:35:47 -- common/autotest_common.sh@10 -- # set +x 00:22:26.304 ************************************ 00:22:26.304 END TEST nvmf_nmic 00:22:26.304 ************************************ 00:22:26.304 21:35:47 -- nvmf/nvmf.sh@54 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:22:26.304 21:35:47 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:22:26.304 21:35:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:26.304 21:35:47 -- common/autotest_common.sh@10 -- # set +x 00:22:26.304 ************************************ 00:22:26.304 START TEST nvmf_fio_target 00:22:26.304 ************************************ 00:22:26.304 21:35:47 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:22:26.304 * Looking for test storage... 
00:22:26.304 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:22:26.304 21:35:47 -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:26.304 21:35:47 -- nvmf/common.sh@7 -- # uname -s 00:22:26.304 21:35:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:26.304 21:35:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:26.304 21:35:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:26.304 21:35:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:26.304 21:35:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:26.304 21:35:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:26.304 21:35:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:26.304 21:35:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:26.304 21:35:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:26.304 21:35:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:26.304 21:35:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:65f0dc09-2f81-4c7b-a413-2a2a000e2750 00:22:26.304 21:35:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=65f0dc09-2f81-4c7b-a413-2a2a000e2750 00:22:26.304 21:35:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:26.304 21:35:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:26.304 21:35:47 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:26.304 21:35:47 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:26.304 21:35:47 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:26.304 21:35:47 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:26.304 21:35:47 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:26.304 21:35:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.304 21:35:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.304 21:35:47 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.304 21:35:47 -- paths/export.sh@5 
-- # export PATH 00:22:26.304 21:35:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.304 21:35:47 -- nvmf/common.sh@46 -- # : 0 00:22:26.304 21:35:47 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:26.304 21:35:47 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:26.304 21:35:47 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:26.304 21:35:47 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:26.304 21:35:47 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:26.304 21:35:47 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:26.304 21:35:47 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:26.304 21:35:47 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:26.304 21:35:47 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:26.304 21:35:47 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:26.304 21:35:47 -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:26.304 21:35:47 -- target/fio.sh@16 -- # nvmftestinit 00:22:26.304 21:35:47 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:26.304 21:35:47 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:26.304 21:35:47 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:26.304 21:35:47 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:26.304 21:35:47 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:26.304 21:35:47 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:26.304 21:35:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:26.304 21:35:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:26.304 21:35:47 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:22:26.304 21:35:47 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:22:26.304 21:35:47 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:22:26.304 21:35:47 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:22:26.304 21:35:47 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:22:26.304 21:35:47 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:22:26.304 21:35:47 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:26.304 21:35:47 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:26.304 21:35:47 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:26.304 21:35:47 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:22:26.304 21:35:47 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:26.304 21:35:47 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:26.304 21:35:47 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:26.304 21:35:47 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:26.305 21:35:47 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:26.305 21:35:47 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:26.305 21:35:47 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:26.305 21:35:47 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:26.305 21:35:47 -- 
nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:22:26.305 21:35:47 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:22:26.305 Cannot find device "nvmf_tgt_br" 00:22:26.305 21:35:47 -- nvmf/common.sh@154 -- # true 00:22:26.305 21:35:47 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:22:26.305 Cannot find device "nvmf_tgt_br2" 00:22:26.305 21:35:47 -- nvmf/common.sh@155 -- # true 00:22:26.305 21:35:47 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:22:26.305 21:35:47 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:22:26.305 Cannot find device "nvmf_tgt_br" 00:22:26.305 21:35:47 -- nvmf/common.sh@157 -- # true 00:22:26.305 21:35:47 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:22:26.305 Cannot find device "nvmf_tgt_br2" 00:22:26.305 21:35:47 -- nvmf/common.sh@158 -- # true 00:22:26.305 21:35:47 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:22:26.564 21:35:47 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:22:26.564 21:35:47 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:26.564 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:26.564 21:35:47 -- nvmf/common.sh@161 -- # true 00:22:26.564 21:35:47 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:26.564 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:26.564 21:35:47 -- nvmf/common.sh@162 -- # true 00:22:26.564 21:35:47 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:22:26.564 21:35:47 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:26.564 21:35:47 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:26.564 21:35:47 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:26.564 21:35:47 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:26.564 21:35:47 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:26.564 21:35:47 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:26.564 21:35:47 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:26.564 21:35:47 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:26.564 21:35:47 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:22:26.564 21:35:47 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:22:26.564 21:35:47 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:22:26.564 21:35:47 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:22:26.564 21:35:47 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:26.564 21:35:47 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:26.564 21:35:47 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:26.564 21:35:47 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:22:26.564 21:35:47 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:22:26.564 21:35:47 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:22:26.564 21:35:47 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:26.564 21:35:47 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:26.564 21:35:47 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 
-i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:26.564 21:35:47 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:26.823 21:35:47 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:22:26.823 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:26.823 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:22:26.823 00:22:26.823 --- 10.0.0.2 ping statistics --- 00:22:26.823 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:26.823 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:22:26.823 21:35:47 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:22:26.823 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:26.823 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:22:26.823 00:22:26.823 --- 10.0.0.3 ping statistics --- 00:22:26.823 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:26.823 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:22:26.823 21:35:47 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:26.823 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:26.823 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:22:26.823 00:22:26.823 --- 10.0.0.1 ping statistics --- 00:22:26.823 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:26.823 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:22:26.823 21:35:47 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:26.823 21:35:47 -- nvmf/common.sh@421 -- # return 0 00:22:26.823 21:35:47 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:26.823 21:35:47 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:26.823 21:35:47 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:26.823 21:35:47 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:26.823 21:35:47 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:26.823 21:35:47 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:26.823 21:35:47 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:26.823 21:35:47 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:22:26.823 21:35:47 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:26.823 21:35:47 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:26.823 21:35:47 -- common/autotest_common.sh@10 -- # set +x 00:22:26.823 21:35:47 -- nvmf/common.sh@469 -- # nvmfpid=75594 00:22:26.823 21:35:47 -- nvmf/common.sh@470 -- # waitforlisten 75594 00:22:26.823 21:35:47 -- common/autotest_common.sh@819 -- # '[' -z 75594 ']' 00:22:26.823 21:35:47 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:26.823 21:35:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:26.823 21:35:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:26.823 21:35:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:26.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:26.823 21:35:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:26.823 21:35:47 -- common/autotest_common.sh@10 -- # set +x 00:22:26.823 [2024-07-11 21:35:47.612840] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:22:26.823 [2024-07-11 21:35:47.612956] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:26.823 [2024-07-11 21:35:47.754669] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:27.081 [2024-07-11 21:35:47.858948] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:27.081 [2024-07-11 21:35:47.859164] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:27.081 [2024-07-11 21:35:47.859184] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:27.081 [2024-07-11 21:35:47.859198] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:27.081 [2024-07-11 21:35:47.859370] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:27.081 [2024-07-11 21:35:47.859538] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:27.081 [2024-07-11 21:35:47.859660] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:27.081 [2024-07-11 21:35:47.859669] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:27.646 21:35:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:27.647 21:35:48 -- common/autotest_common.sh@852 -- # return 0 00:22:27.647 21:35:48 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:27.647 21:35:48 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:27.647 21:35:48 -- common/autotest_common.sh@10 -- # set +x 00:22:27.647 21:35:48 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:27.647 21:35:48 -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:27.905 [2024-07-11 21:35:48.783068] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:27.905 21:35:48 -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:22:28.471 21:35:49 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:22:28.471 21:35:49 -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:22:28.471 21:35:49 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:22:28.471 21:35:49 -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:22:28.793 21:35:49 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:22:28.793 21:35:49 -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:22:29.066 21:35:49 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:22:29.066 21:35:49 -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:22:29.325 21:35:50 -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:22:29.583 21:35:50 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:22:29.583 21:35:50 -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:22:29.841 21:35:50 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:22:29.841 21:35:50 -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:22:30.099 21:35:51 -- target/fio.sh@31 -- # 
concat_malloc_bdevs+=Malloc6 00:22:30.099 21:35:51 -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:22:30.357 21:35:51 -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:22:30.616 21:35:51 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:22:30.616 21:35:51 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:30.874 21:35:51 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:22:30.875 21:35:51 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:31.133 21:35:51 -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:31.391 [2024-07-11 21:35:52.135395] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:31.391 21:35:52 -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:22:31.649 21:35:52 -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:22:31.907 21:35:52 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:65f0dc09-2f81-4c7b-a413-2a2a000e2750 --hostid=65f0dc09-2f81-4c7b-a413-2a2a000e2750 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:22:31.907 21:35:52 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:22:31.907 21:35:52 -- common/autotest_common.sh@1177 -- # local i=0 00:22:31.907 21:35:52 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:31.907 21:35:52 -- common/autotest_common.sh@1179 -- # [[ -n 4 ]] 00:22:31.907 21:35:52 -- common/autotest_common.sh@1180 -- # nvme_device_counter=4 00:22:31.907 21:35:52 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:34.538 21:35:54 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:34.538 21:35:54 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:34.538 21:35:54 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:22:34.538 21:35:54 -- common/autotest_common.sh@1186 -- # nvme_devices=4 00:22:34.538 21:35:54 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:34.538 21:35:54 -- common/autotest_common.sh@1187 -- # return 0 00:22:34.538 21:35:54 -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:22:34.538 [global] 00:22:34.538 thread=1 00:22:34.538 invalidate=1 00:22:34.538 rw=write 00:22:34.538 time_based=1 00:22:34.538 runtime=1 00:22:34.538 ioengine=libaio 00:22:34.538 direct=1 00:22:34.538 bs=4096 00:22:34.538 iodepth=1 00:22:34.538 norandommap=0 00:22:34.538 numjobs=1 00:22:34.538 00:22:34.538 verify_dump=1 00:22:34.538 verify_backlog=512 00:22:34.538 verify_state_save=0 00:22:34.538 do_verify=1 00:22:34.538 verify=crc32c-intel 00:22:34.538 [job0] 00:22:34.538 filename=/dev/nvme0n1 00:22:34.538 [job1] 00:22:34.538 filename=/dev/nvme0n2 00:22:34.538 [job2] 00:22:34.538 filename=/dev/nvme0n3 00:22:34.538 [job3] 00:22:34.538 filename=/dev/nvme0n4 00:22:34.538 Could not set queue depth (nvme0n1) 00:22:34.538 Could not set queue depth (nvme0n2) 
00:22:34.538 Could not set queue depth (nvme0n3) 00:22:34.538 Could not set queue depth (nvme0n4) 00:22:34.538 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:22:34.538 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:22:34.538 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:22:34.538 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:22:34.538 fio-3.35 00:22:34.538 Starting 4 threads 00:22:35.479 00:22:35.479 job0: (groupid=0, jobs=1): err= 0: pid=75781: Thu Jul 11 21:35:56 2024 00:22:35.479 read: IOPS=2321, BW=9287KiB/s (9510kB/s)(9296KiB/1001msec) 00:22:35.479 slat (nsec): min=8303, max=31228, avg=12607.72, stdev=2436.06 00:22:35.479 clat (usec): min=135, max=1626, avg=218.51, stdev=65.06 00:22:35.479 lat (usec): min=147, max=1638, avg=231.12, stdev=64.81 00:22:35.479 clat percentiles (usec): 00:22:35.479 | 1.00th=[ 145], 5.00th=[ 153], 10.00th=[ 157], 20.00th=[ 163], 00:22:35.479 | 30.00th=[ 167], 40.00th=[ 176], 50.00th=[ 227], 60.00th=[ 239], 00:22:35.479 | 70.00th=[ 249], 80.00th=[ 260], 90.00th=[ 281], 95.00th=[ 343], 00:22:35.479 | 99.00th=[ 367], 99.50th=[ 375], 99.90th=[ 400], 99.95th=[ 502], 00:22:35.479 | 99.99th=[ 1631] 00:22:35.479 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:22:35.479 slat (usec): min=10, max=104, avg=18.34, stdev= 4.41 00:22:35.479 clat (usec): min=95, max=1897, avg=159.37, stdev=53.04 00:22:35.479 lat (usec): min=115, max=1917, avg=177.71, stdev=51.81 00:22:35.479 clat percentiles (usec): 00:22:35.479 | 1.00th=[ 111], 5.00th=[ 119], 10.00th=[ 122], 20.00th=[ 127], 00:22:35.479 | 30.00th=[ 130], 40.00th=[ 135], 50.00th=[ 139], 60.00th=[ 147], 00:22:35.479 | 70.00th=[ 186], 80.00th=[ 208], 90.00th=[ 223], 95.00th=[ 233], 00:22:35.479 | 99.00th=[ 251], 99.50th=[ 260], 99.90th=[ 277], 99.95th=[ 277], 00:22:35.479 | 99.99th=[ 1893] 00:22:35.479 bw ( KiB/s): min=12288, max=12288, per=27.30%, avg=12288.00, stdev= 0.00, samples=1 00:22:35.479 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:22:35.479 lat (usec) : 100=0.08%, 250=85.40%, 500=14.48% 00:22:35.479 lat (msec) : 2=0.04% 00:22:35.479 cpu : usr=1.50%, sys=6.40%, ctx=4885, majf=0, minf=7 00:22:35.479 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:35.479 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:35.479 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:35.479 issued rwts: total=2324,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:35.479 latency : target=0, window=0, percentile=100.00%, depth=1 00:22:35.479 job1: (groupid=0, jobs=1): err= 0: pid=75782: Thu Jul 11 21:35:56 2024 00:22:35.479 read: IOPS=2957, BW=11.6MiB/s (12.1MB/s)(11.6MiB/1001msec) 00:22:35.479 slat (nsec): min=11770, max=35671, avg=14720.01, stdev=2400.68 00:22:35.479 clat (usec): min=134, max=223, avg=165.10, stdev=12.16 00:22:35.479 lat (usec): min=146, max=243, avg=179.82, stdev=12.63 00:22:35.479 clat percentiles (usec): 00:22:35.479 | 1.00th=[ 141], 5.00th=[ 147], 10.00th=[ 151], 20.00th=[ 155], 00:22:35.479 | 30.00th=[ 159], 40.00th=[ 161], 50.00th=[ 165], 60.00th=[ 167], 00:22:35.479 | 70.00th=[ 172], 80.00th=[ 176], 90.00th=[ 182], 95.00th=[ 188], 00:22:35.479 | 99.00th=[ 200], 99.50th=[ 206], 99.90th=[ 221], 99.95th=[ 221], 00:22:35.479 | 99.99th=[ 225] 
00:22:35.479 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:22:35.479 slat (usec): min=13, max=124, avg=21.08, stdev= 4.76 00:22:35.479 clat (usec): min=94, max=842, avg=127.62, stdev=18.78 00:22:35.479 lat (usec): min=112, max=878, avg=148.69, stdev=19.79 00:22:35.479 clat percentiles (usec): 00:22:35.479 | 1.00th=[ 103], 5.00th=[ 111], 10.00th=[ 114], 20.00th=[ 119], 00:22:35.479 | 30.00th=[ 122], 40.00th=[ 125], 50.00th=[ 128], 60.00th=[ 130], 00:22:35.479 | 70.00th=[ 133], 80.00th=[ 137], 90.00th=[ 141], 95.00th=[ 147], 00:22:35.479 | 99.00th=[ 155], 99.50th=[ 157], 99.90th=[ 182], 99.95th=[ 570], 00:22:35.479 | 99.99th=[ 840] 00:22:35.479 bw ( KiB/s): min=12288, max=12288, per=27.30%, avg=12288.00, stdev= 0.00, samples=1 00:22:35.479 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:22:35.479 lat (usec) : 100=0.20%, 250=99.77%, 750=0.02%, 1000=0.02% 00:22:35.479 cpu : usr=2.30%, sys=8.40%, ctx=6032, majf=0, minf=7 00:22:35.479 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:35.479 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:35.479 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:35.479 issued rwts: total=2960,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:35.479 latency : target=0, window=0, percentile=100.00%, depth=1 00:22:35.479 job2: (groupid=0, jobs=1): err= 0: pid=75783: Thu Jul 11 21:35:56 2024 00:22:35.479 read: IOPS=2192, BW=8771KiB/s (8982kB/s)(8780KiB/1001msec) 00:22:35.479 slat (usec): min=8, max=207, avg=13.83, stdev= 5.28 00:22:35.479 clat (usec): min=140, max=7653, avg=221.25, stdev=168.65 00:22:35.479 lat (usec): min=152, max=7666, avg=235.07, stdev=168.33 00:22:35.479 clat percentiles (usec): 00:22:35.479 | 1.00th=[ 151], 5.00th=[ 157], 10.00th=[ 161], 20.00th=[ 165], 00:22:35.479 | 30.00th=[ 172], 40.00th=[ 180], 50.00th=[ 227], 60.00th=[ 239], 00:22:35.479 | 70.00th=[ 247], 80.00th=[ 258], 90.00th=[ 273], 95.00th=[ 326], 00:22:35.479 | 99.00th=[ 363], 99.50th=[ 379], 99.90th=[ 457], 99.95th=[ 1254], 00:22:35.479 | 99.99th=[ 7635] 00:22:35.479 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:22:35.479 slat (usec): min=11, max=166, avg=22.04, stdev= 6.64 00:22:35.479 clat (usec): min=107, max=3197, avg=163.57, stdev=82.45 00:22:35.479 lat (usec): min=130, max=3216, avg=185.61, stdev=82.20 00:22:35.479 clat percentiles (usec): 00:22:35.479 | 1.00th=[ 115], 5.00th=[ 121], 10.00th=[ 124], 20.00th=[ 128], 00:22:35.479 | 30.00th=[ 133], 40.00th=[ 137], 50.00th=[ 145], 60.00th=[ 159], 00:22:35.479 | 70.00th=[ 188], 80.00th=[ 204], 90.00th=[ 219], 95.00th=[ 229], 00:22:35.479 | 99.00th=[ 247], 99.50th=[ 262], 99.90th=[ 537], 99.95th=[ 2212], 00:22:35.479 | 99.99th=[ 3195] 00:22:35.479 bw ( KiB/s): min=12288, max=12288, per=27.30%, avg=12288.00, stdev= 0.00, samples=1 00:22:35.479 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:22:35.479 lat (usec) : 250=87.28%, 500=12.60%, 750=0.04% 00:22:35.479 lat (msec) : 2=0.02%, 4=0.04%, 10=0.02% 00:22:35.479 cpu : usr=2.10%, sys=6.90%, ctx=4758, majf=0, minf=9 00:22:35.479 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:35.479 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:35.479 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:35.479 issued rwts: total=2195,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:35.479 latency : target=0, window=0, 
percentile=100.00%, depth=1 00:22:35.479 job3: (groupid=0, jobs=1): err= 0: pid=75784: Thu Jul 11 21:35:56 2024 00:22:35.479 read: IOPS=2764, BW=10.8MiB/s (11.3MB/s)(10.8MiB/1001msec) 00:22:35.479 slat (usec): min=12, max=106, avg=15.04, stdev= 3.25 00:22:35.479 clat (usec): min=81, max=615, avg=171.30, stdev=16.88 00:22:35.479 lat (usec): min=154, max=631, avg=186.34, stdev=16.90 00:22:35.479 clat percentiles (usec): 00:22:35.479 | 1.00th=[ 147], 5.00th=[ 153], 10.00th=[ 157], 20.00th=[ 161], 00:22:35.479 | 30.00th=[ 163], 40.00th=[ 167], 50.00th=[ 169], 60.00th=[ 174], 00:22:35.479 | 70.00th=[ 178], 80.00th=[ 182], 90.00th=[ 188], 95.00th=[ 194], 00:22:35.479 | 99.00th=[ 208], 99.50th=[ 215], 99.90th=[ 265], 99.95th=[ 553], 00:22:35.480 | 99.99th=[ 619] 00:22:35.480 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:22:35.480 slat (nsec): min=13184, max=92300, avg=21177.92, stdev=3998.64 00:22:35.480 clat (usec): min=100, max=257, avg=133.16, stdev=11.95 00:22:35.480 lat (usec): min=120, max=278, avg=154.34, stdev=12.57 00:22:35.480 clat percentiles (usec): 00:22:35.480 | 1.00th=[ 110], 5.00th=[ 117], 10.00th=[ 121], 20.00th=[ 125], 00:22:35.480 | 30.00th=[ 127], 40.00th=[ 130], 50.00th=[ 133], 60.00th=[ 135], 00:22:35.480 | 70.00th=[ 139], 80.00th=[ 143], 90.00th=[ 149], 95.00th=[ 153], 00:22:35.480 | 99.00th=[ 167], 99.50th=[ 174], 99.90th=[ 221], 99.95th=[ 255], 00:22:35.480 | 99.99th=[ 258] 00:22:35.480 bw ( KiB/s): min=12312, max=12312, per=27.35%, avg=12312.00, stdev= 0.00, samples=1 00:22:35.480 iops : min= 3078, max= 3078, avg=3078.00, stdev= 0.00, samples=1 00:22:35.480 lat (usec) : 100=0.02%, 250=99.90%, 500=0.05%, 750=0.03% 00:22:35.480 cpu : usr=2.30%, sys=8.30%, ctx=5839, majf=0, minf=12 00:22:35.480 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:35.480 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:35.480 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:35.480 issued rwts: total=2767,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:35.480 latency : target=0, window=0, percentile=100.00%, depth=1 00:22:35.480 00:22:35.480 Run status group 0 (all jobs): 00:22:35.480 READ: bw=40.0MiB/s (41.9MB/s), 8771KiB/s-11.6MiB/s (8982kB/s-12.1MB/s), io=40.0MiB (42.0MB), run=1001-1001msec 00:22:35.480 WRITE: bw=44.0MiB/s (46.1MB/s), 9.99MiB/s-12.0MiB/s (10.5MB/s-12.6MB/s), io=44.0MiB (46.1MB), run=1001-1001msec 00:22:35.480 00:22:35.480 Disk stats (read/write): 00:22:35.480 nvme0n1: ios=2098/2239, merge=0/0, ticks=457/353, in_queue=810, util=86.66% 00:22:35.480 nvme0n2: ios=2520/2560, merge=0/0, ticks=432/348, in_queue=780, util=86.52% 00:22:35.480 nvme0n3: ios=2037/2048, merge=0/0, ticks=435/319, in_queue=754, util=88.13% 00:22:35.480 nvme0n4: ios=2367/2560, merge=0/0, ticks=410/364, in_queue=774, util=89.52% 00:22:35.480 21:35:56 -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:22:35.480 [global] 00:22:35.480 thread=1 00:22:35.480 invalidate=1 00:22:35.480 rw=randwrite 00:22:35.480 time_based=1 00:22:35.480 runtime=1 00:22:35.480 ioengine=libaio 00:22:35.480 direct=1 00:22:35.480 bs=4096 00:22:35.480 iodepth=1 00:22:35.480 norandommap=0 00:22:35.480 numjobs=1 00:22:35.480 00:22:35.480 verify_dump=1 00:22:35.480 verify_backlog=512 00:22:35.480 verify_state_save=0 00:22:35.480 do_verify=1 00:22:35.480 verify=crc32c-intel 00:22:35.480 [job0] 00:22:35.480 filename=/dev/nvme0n1 00:22:35.480 [job1] 
00:22:35.480 filename=/dev/nvme0n2 00:22:35.480 [job2] 00:22:35.480 filename=/dev/nvme0n3 00:22:35.480 [job3] 00:22:35.480 filename=/dev/nvme0n4 00:22:35.480 Could not set queue depth (nvme0n1) 00:22:35.480 Could not set queue depth (nvme0n2) 00:22:35.480 Could not set queue depth (nvme0n3) 00:22:35.480 Could not set queue depth (nvme0n4) 00:22:35.738 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:22:35.738 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:22:35.738 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:22:35.738 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:22:35.738 fio-3.35 00:22:35.738 Starting 4 threads 00:22:37.120 00:22:37.120 job0: (groupid=0, jobs=1): err= 0: pid=75843: Thu Jul 11 21:35:57 2024 00:22:37.120 read: IOPS=1839, BW=7357KiB/s (7533kB/s)(7364KiB/1001msec) 00:22:37.120 slat (nsec): min=13678, max=77400, avg=17405.57, stdev=4590.14 00:22:37.120 clat (usec): min=154, max=2289, avg=270.22, stdev=78.27 00:22:37.120 lat (usec): min=173, max=2318, avg=287.63, stdev=80.08 00:22:37.120 clat percentiles (usec): 00:22:37.120 | 1.00th=[ 184], 5.00th=[ 221], 10.00th=[ 227], 20.00th=[ 235], 00:22:37.120 | 30.00th=[ 241], 40.00th=[ 247], 50.00th=[ 251], 60.00th=[ 258], 00:22:37.120 | 70.00th=[ 265], 80.00th=[ 277], 90.00th=[ 355], 95.00th=[ 404], 00:22:37.120 | 99.00th=[ 482], 99.50th=[ 498], 99.90th=[ 1401], 99.95th=[ 2278], 00:22:37.120 | 99.99th=[ 2278] 00:22:37.120 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:22:37.120 slat (usec): min=17, max=132, avg=28.17, stdev= 8.27 00:22:37.120 clat (usec): min=98, max=1484, avg=197.10, stdev=75.34 00:22:37.120 lat (usec): min=124, max=1513, avg=225.27, stdev=79.74 00:22:37.120 clat percentiles (usec): 00:22:37.120 | 1.00th=[ 106], 5.00th=[ 115], 10.00th=[ 120], 20.00th=[ 131], 00:22:37.120 | 30.00th=[ 151], 40.00th=[ 172], 50.00th=[ 184], 60.00th=[ 196], 00:22:37.120 | 70.00th=[ 210], 80.00th=[ 260], 90.00th=[ 310], 95.00th=[ 343], 00:22:37.120 | 99.00th=[ 375], 99.50th=[ 379], 99.90th=[ 416], 99.95th=[ 529], 00:22:37.120 | 99.99th=[ 1483] 00:22:37.120 bw ( KiB/s): min= 8192, max= 8192, per=20.24%, avg=8192.00, stdev= 0.00, samples=1 00:22:37.120 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:22:37.120 lat (usec) : 100=0.05%, 250=64.08%, 500=35.64%, 750=0.15% 00:22:37.120 lat (msec) : 2=0.05%, 4=0.03% 00:22:37.120 cpu : usr=1.70%, sys=7.40%, ctx=3889, majf=0, minf=17 00:22:37.120 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:37.120 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:37.120 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:37.120 issued rwts: total=1841,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:37.120 latency : target=0, window=0, percentile=100.00%, depth=1 00:22:37.120 job1: (groupid=0, jobs=1): err= 0: pid=75844: Thu Jul 11 21:35:57 2024 00:22:37.120 read: IOPS=1984, BW=7936KiB/s (8127kB/s)(7952KiB/1002msec) 00:22:37.120 slat (nsec): min=11828, max=76594, avg=17387.63, stdev=6847.19 00:22:37.120 clat (usec): min=142, max=2636, avg=283.54, stdev=93.66 00:22:37.120 lat (usec): min=161, max=2652, avg=300.93, stdev=97.10 00:22:37.120 clat percentiles (usec): 00:22:37.120 | 1.00th=[ 202], 5.00th=[ 223], 10.00th=[ 229], 
20.00th=[ 237], 00:22:37.120 | 30.00th=[ 243], 40.00th=[ 249], 50.00th=[ 255], 60.00th=[ 262], 00:22:37.120 | 70.00th=[ 269], 80.00th=[ 289], 90.00th=[ 445], 95.00th=[ 474], 00:22:37.120 | 99.00th=[ 506], 99.50th=[ 519], 99.90th=[ 603], 99.95th=[ 2638], 00:22:37.120 | 99.99th=[ 2638] 00:22:37.120 write: IOPS=2043, BW=8176KiB/s (8372kB/s)(8192KiB/1002msec); 0 zone resets 00:22:37.120 slat (usec): min=13, max=131, avg=22.17, stdev= 5.78 00:22:37.120 clat (usec): min=93, max=2545, avg=169.97, stdev=65.83 00:22:37.120 lat (usec): min=111, max=2573, avg=192.15, stdev=66.92 00:22:37.120 clat percentiles (usec): 00:22:37.120 | 1.00th=[ 98], 5.00th=[ 108], 10.00th=[ 113], 20.00th=[ 124], 00:22:37.120 | 30.00th=[ 143], 40.00th=[ 169], 50.00th=[ 180], 60.00th=[ 188], 00:22:37.120 | 70.00th=[ 194], 80.00th=[ 202], 90.00th=[ 212], 95.00th=[ 221], 00:22:37.120 | 99.00th=[ 239], 99.50th=[ 247], 99.90th=[ 347], 99.95th=[ 717], 00:22:37.120 | 99.99th=[ 2540] 00:22:37.120 bw ( KiB/s): min= 8096, max= 8288, per=20.24%, avg=8192.00, stdev=135.76, samples=2 00:22:37.120 iops : min= 2024, max= 2072, avg=2048.00, stdev=33.94, samples=2 00:22:37.120 lat (usec) : 100=0.97%, 250=70.47%, 500=27.68%, 750=0.84% 00:22:37.120 lat (msec) : 4=0.05% 00:22:37.120 cpu : usr=1.60%, sys=6.39%, ctx=4037, majf=0, minf=14 00:22:37.120 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:37.120 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:37.120 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:37.120 issued rwts: total=1988,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:37.120 latency : target=0, window=0, percentile=100.00%, depth=1 00:22:37.120 job2: (groupid=0, jobs=1): err= 0: pid=75845: Thu Jul 11 21:35:57 2024 00:22:37.120 read: IOPS=2826, BW=11.0MiB/s (11.6MB/s)(11.1MiB/1001msec) 00:22:37.120 slat (usec): min=11, max=109, avg=13.93, stdev= 2.86 00:22:37.120 clat (usec): min=69, max=498, avg=171.31, stdev=15.89 00:22:37.120 lat (usec): min=151, max=512, avg=185.24, stdev=16.07 00:22:37.120 clat percentiles (usec): 00:22:37.120 | 1.00th=[ 145], 5.00th=[ 151], 10.00th=[ 155], 20.00th=[ 159], 00:22:37.120 | 30.00th=[ 163], 40.00th=[ 167], 50.00th=[ 172], 60.00th=[ 174], 00:22:37.120 | 70.00th=[ 178], 80.00th=[ 182], 90.00th=[ 190], 95.00th=[ 196], 00:22:37.120 | 99.00th=[ 204], 99.50th=[ 210], 99.90th=[ 302], 99.95th=[ 424], 00:22:37.120 | 99.99th=[ 498] 00:22:37.120 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:22:37.120 slat (usec): min=12, max=128, avg=19.75, stdev= 4.29 00:22:37.120 clat (usec): min=97, max=479, avg=131.88, stdev=15.87 00:22:37.120 lat (usec): min=115, max=506, avg=151.64, stdev=17.06 00:22:37.120 clat percentiles (usec): 00:22:37.120 | 1.00th=[ 105], 5.00th=[ 113], 10.00th=[ 117], 20.00th=[ 122], 00:22:37.120 | 30.00th=[ 126], 40.00th=[ 128], 50.00th=[ 131], 60.00th=[ 135], 00:22:37.120 | 70.00th=[ 137], 80.00th=[ 141], 90.00th=[ 147], 95.00th=[ 153], 00:22:37.120 | 99.00th=[ 165], 99.50th=[ 174], 99.90th=[ 293], 99.95th=[ 383], 00:22:37.120 | 99.99th=[ 482] 00:22:37.120 bw ( KiB/s): min=12288, max=12288, per=30.36%, avg=12288.00, stdev= 0.00, samples=1 00:22:37.120 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:22:37.120 lat (usec) : 100=0.08%, 250=99.75%, 500=0.17% 00:22:37.120 cpu : usr=2.10%, sys=7.80%, ctx=5904, majf=0, minf=13 00:22:37.120 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:37.120 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:37.120 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:37.120 issued rwts: total=2829,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:37.120 latency : target=0, window=0, percentile=100.00%, depth=1 00:22:37.120 job3: (groupid=0, jobs=1): err= 0: pid=75846: Thu Jul 11 21:35:57 2024 00:22:37.120 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:22:37.120 slat (nsec): min=12370, max=70755, avg=18694.04, stdev=6200.55 00:22:37.120 clat (usec): min=130, max=811, avg=178.78, stdev=23.63 00:22:37.120 lat (usec): min=153, max=828, avg=197.47, stdev=24.79 00:22:37.120 clat percentiles (usec): 00:22:37.120 | 1.00th=[ 151], 5.00th=[ 157], 10.00th=[ 161], 20.00th=[ 165], 00:22:37.120 | 30.00th=[ 169], 40.00th=[ 174], 50.00th=[ 176], 60.00th=[ 180], 00:22:37.120 | 70.00th=[ 184], 80.00th=[ 190], 90.00th=[ 198], 95.00th=[ 206], 00:22:37.120 | 99.00th=[ 243], 99.50th=[ 258], 99.90th=[ 474], 99.95th=[ 627], 00:22:37.120 | 99.99th=[ 816] 00:22:37.120 write: IOPS=2967, BW=11.6MiB/s (12.2MB/s)(11.6MiB/1001msec); 0 zone resets 00:22:37.120 slat (usec): min=15, max=515, avg=27.83, stdev=12.08 00:22:37.120 clat (usec): min=99, max=878, avg=134.42, stdev=19.64 00:22:37.120 lat (usec): min=123, max=915, avg=162.25, stdev=23.60 00:22:37.120 clat percentiles (usec): 00:22:37.120 | 1.00th=[ 111], 5.00th=[ 117], 10.00th=[ 120], 20.00th=[ 124], 00:22:37.120 | 30.00th=[ 128], 40.00th=[ 131], 50.00th=[ 135], 60.00th=[ 137], 00:22:37.120 | 70.00th=[ 141], 80.00th=[ 145], 90.00th=[ 149], 95.00th=[ 155], 00:22:37.120 | 99.00th=[ 167], 99.50th=[ 176], 99.90th=[ 363], 99.95th=[ 404], 00:22:37.120 | 99.99th=[ 881] 00:22:37.120 bw ( KiB/s): min=12312, max=12312, per=30.42%, avg=12312.00, stdev= 0.00, samples=1 00:22:37.120 iops : min= 3078, max= 3078, avg=3078.00, stdev= 0.00, samples=1 00:22:37.120 lat (usec) : 100=0.02%, 250=99.60%, 500=0.33%, 750=0.02%, 1000=0.04% 00:22:37.120 cpu : usr=2.70%, sys=10.20%, ctx=5533, majf=0, minf=3 00:22:37.120 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:37.120 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:37.120 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:37.120 issued rwts: total=2560,2970,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:37.120 latency : target=0, window=0, percentile=100.00%, depth=1 00:22:37.120 00:22:37.120 Run status group 0 (all jobs): 00:22:37.120 READ: bw=35.9MiB/s (37.7MB/s), 7357KiB/s-11.0MiB/s (7533kB/s-11.6MB/s), io=36.0MiB (37.8MB), run=1001-1002msec 00:22:37.120 WRITE: bw=39.5MiB/s (41.4MB/s), 8176KiB/s-12.0MiB/s (8372kB/s-12.6MB/s), io=39.6MiB (41.5MB), run=1001-1002msec 00:22:37.120 00:22:37.120 Disk stats (read/write): 00:22:37.120 nvme0n1: ios=1586/1620, merge=0/0, ticks=460/358, in_queue=818, util=85.77% 00:22:37.120 nvme0n2: ios=1536/1980, merge=0/0, ticks=432/355, in_queue=787, util=85.51% 00:22:37.120 nvme0n3: ios=2377/2560, merge=0/0, ticks=412/354, in_queue=766, util=88.60% 00:22:37.120 nvme0n4: ios=2050/2560, merge=0/0, ticks=380/375, in_queue=755, util=89.54% 00:22:37.121 21:35:57 -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:22:37.121 [global] 00:22:37.121 thread=1 00:22:37.121 invalidate=1 00:22:37.121 rw=write 00:22:37.121 time_based=1 00:22:37.121 runtime=1 00:22:37.121 ioengine=libaio 00:22:37.121 direct=1 00:22:37.121 bs=4096 00:22:37.121 iodepth=128 00:22:37.121 norandommap=0 00:22:37.121 
numjobs=1 00:22:37.121 00:22:37.121 verify_dump=1 00:22:37.121 verify_backlog=512 00:22:37.121 verify_state_save=0 00:22:37.121 do_verify=1 00:22:37.121 verify=crc32c-intel 00:22:37.121 [job0] 00:22:37.121 filename=/dev/nvme0n1 00:22:37.121 [job1] 00:22:37.121 filename=/dev/nvme0n2 00:22:37.121 [job2] 00:22:37.121 filename=/dev/nvme0n3 00:22:37.121 [job3] 00:22:37.121 filename=/dev/nvme0n4 00:22:37.121 Could not set queue depth (nvme0n1) 00:22:37.121 Could not set queue depth (nvme0n2) 00:22:37.121 Could not set queue depth (nvme0n3) 00:22:37.121 Could not set queue depth (nvme0n4) 00:22:37.121 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:22:37.121 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:22:37.121 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:22:37.121 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:22:37.121 fio-3.35 00:22:37.121 Starting 4 threads 00:22:38.505 00:22:38.505 job0: (groupid=0, jobs=1): err= 0: pid=75904: Thu Jul 11 21:35:59 2024 00:22:38.505 read: IOPS=3309, BW=12.9MiB/s (13.6MB/s)(13.0MiB/1006msec) 00:22:38.505 slat (usec): min=9, max=11874, avg=157.36, stdev=883.11 00:22:38.505 clat (usec): min=2677, max=48206, avg=20553.03, stdev=7166.77 00:22:38.505 lat (usec): min=5579, max=48223, avg=20710.38, stdev=7165.14 00:22:38.505 clat percentiles (usec): 00:22:38.505 | 1.00th=[ 9241], 5.00th=[12911], 10.00th=[13698], 20.00th=[14222], 00:22:38.505 | 30.00th=[16057], 40.00th=[17957], 50.00th=[19792], 60.00th=[21627], 00:22:38.505 | 70.00th=[22414], 80.00th=[24511], 90.00th=[29492], 95.00th=[35914], 00:22:38.505 | 99.00th=[43254], 99.50th=[47973], 99.90th=[47973], 99.95th=[47973], 00:22:38.505 | 99.99th=[47973] 00:22:38.505 write: IOPS=3562, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1006msec); 0 zone resets 00:22:38.505 slat (usec): min=11, max=9874, avg=126.19, stdev=654.41 00:22:38.505 clat (usec): min=8992, max=30464, avg=16075.82, stdev=4523.23 00:22:38.505 lat (usec): min=11376, max=30492, avg=16202.01, stdev=4515.78 00:22:38.505 clat percentiles (usec): 00:22:38.505 | 1.00th=[10028], 5.00th=[11469], 10.00th=[11863], 20.00th=[12125], 00:22:38.505 | 30.00th=[13304], 40.00th=[15008], 50.00th=[15401], 60.00th=[15664], 00:22:38.505 | 70.00th=[16712], 80.00th=[18220], 90.00th=[22676], 95.00th=[27919], 00:22:38.505 | 99.00th=[28967], 99.50th=[30278], 99.90th=[30540], 99.95th=[30540], 00:22:38.505 | 99.99th=[30540] 00:22:38.505 bw ( KiB/s): min=12288, max=16416, per=26.90%, avg=14352.00, stdev=2918.94, samples=2 00:22:38.505 iops : min= 3072, max= 4104, avg=3588.00, stdev=729.73, samples=2 00:22:38.505 lat (msec) : 4=0.01%, 10=1.45%, 20=68.26%, 50=30.28% 00:22:38.505 cpu : usr=2.19%, sys=10.15%, ctx=225, majf=0, minf=7 00:22:38.505 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:22:38.505 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:38.505 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:38.505 issued rwts: total=3329,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:38.505 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:38.505 job1: (groupid=0, jobs=1): err= 0: pid=75905: Thu Jul 11 21:35:59 2024 00:22:38.505 read: IOPS=1522, BW=6089KiB/s (6235kB/s)(6144KiB/1009msec) 00:22:38.505 slat (usec): min=5, max=8294, avg=323.04, stdev=1115.89 
00:22:38.505 clat (usec): min=23508, max=65971, avg=39423.44, stdev=6849.68 00:22:38.505 lat (usec): min=23531, max=65995, avg=39746.48, stdev=6911.92 00:22:38.505 clat percentiles (usec): 00:22:38.505 | 1.00th=[24249], 5.00th=[28181], 10.00th=[29754], 20.00th=[32900], 00:22:38.505 | 30.00th=[35390], 40.00th=[38011], 50.00th=[40109], 60.00th=[42206], 00:22:38.505 | 70.00th=[43779], 80.00th=[45351], 90.00th=[46924], 95.00th=[48497], 00:22:38.505 | 99.00th=[57410], 99.50th=[62129], 99.90th=[64750], 99.95th=[65799], 00:22:38.505 | 99.99th=[65799] 00:22:38.505 write: IOPS=1940, BW=7762KiB/s (7948kB/s)(7832KiB/1009msec); 0 zone resets 00:22:38.505 slat (usec): min=10, max=9804, avg=250.42, stdev=960.23 00:22:38.505 clat (usec): min=4720, max=80506, avg=34031.23, stdev=16802.37 00:22:38.505 lat (usec): min=9821, max=80543, avg=34281.66, stdev=16892.75 00:22:38.505 clat percentiles (usec): 00:22:38.505 | 1.00th=[16188], 5.00th=[19792], 10.00th=[20579], 20.00th=[21627], 00:22:38.505 | 30.00th=[23200], 40.00th=[24249], 50.00th=[27657], 60.00th=[30016], 00:22:38.505 | 70.00th=[34341], 80.00th=[48497], 90.00th=[67634], 95.00th=[72877], 00:22:38.505 | 99.00th=[76022], 99.50th=[76022], 99.90th=[79168], 99.95th=[80217], 00:22:38.505 | 99.99th=[80217] 00:22:38.505 bw ( KiB/s): min= 6672, max= 7968, per=13.72%, avg=7320.00, stdev=916.41, samples=2 00:22:38.505 iops : min= 1668, max= 1992, avg=1830.00, stdev=229.10, samples=2 00:22:38.505 lat (msec) : 10=0.14%, 20=2.80%, 50=85.46%, 100=11.59% 00:22:38.505 cpu : usr=2.38%, sys=5.75%, ctx=477, majf=0, minf=7 00:22:38.505 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:22:38.505 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:38.505 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:38.505 issued rwts: total=1536,1958,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:38.505 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:38.505 job2: (groupid=0, jobs=1): err= 0: pid=75906: Thu Jul 11 21:35:59 2024 00:22:38.505 read: IOPS=6131, BW=24.0MiB/s (25.1MB/s)(24.0MiB/1002msec) 00:22:38.505 slat (usec): min=4, max=4650, avg=77.60, stdev=394.00 00:22:38.505 clat (usec): min=6091, max=15761, avg=10293.66, stdev=1127.15 00:22:38.505 lat (usec): min=6103, max=15793, avg=10371.27, stdev=1163.22 00:22:38.505 clat percentiles (usec): 00:22:38.505 | 1.00th=[ 7177], 5.00th=[ 8455], 10.00th=[ 8979], 20.00th=[ 9634], 00:22:38.505 | 30.00th=[ 9896], 40.00th=[10028], 50.00th=[10290], 60.00th=[10421], 00:22:38.505 | 70.00th=[10683], 80.00th=[11076], 90.00th=[11600], 95.00th=[11994], 00:22:38.505 | 99.00th=[13960], 99.50th=[14615], 99.90th=[15139], 99.95th=[15139], 00:22:38.505 | 99.99th=[15795] 00:22:38.505 write: IOPS=6226, BW=24.3MiB/s (25.5MB/s)(24.4MiB/1002msec); 0 zone resets 00:22:38.505 slat (usec): min=9, max=4441, avg=76.32, stdev=395.32 00:22:38.505 clat (usec): min=140, max=15902, avg=10187.34, stdev=1180.19 00:22:38.505 lat (usec): min=3493, max=15919, avg=10263.66, stdev=1235.34 00:22:38.505 clat percentiles (usec): 00:22:38.505 | 1.00th=[ 6128], 5.00th=[ 8586], 10.00th=[ 9372], 20.00th=[ 9765], 00:22:38.505 | 30.00th=[ 9896], 40.00th=[10028], 50.00th=[10159], 60.00th=[10290], 00:22:38.505 | 70.00th=[10421], 80.00th=[10814], 90.00th=[11076], 95.00th=[11731], 00:22:38.505 | 99.00th=[14091], 99.50th=[14877], 99.90th=[15401], 99.95th=[15533], 00:22:38.505 | 99.99th=[15926] 00:22:38.505 bw ( KiB/s): min=24576, max=24625, per=46.11%, avg=24600.50, stdev=34.65, samples=2 
00:22:38.505 iops : min= 6144, max= 6156, avg=6150.00, stdev= 8.49, samples=2 00:22:38.505 lat (usec) : 250=0.01% 00:22:38.505 lat (msec) : 4=0.22%, 10=36.45%, 20=63.33% 00:22:38.505 cpu : usr=6.39%, sys=15.28%, ctx=487, majf=0, minf=6 00:22:38.505 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:22:38.505 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:38.505 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:38.505 issued rwts: total=6144,6239,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:38.505 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:38.505 job3: (groupid=0, jobs=1): err= 0: pid=75907: Thu Jul 11 21:35:59 2024 00:22:38.505 read: IOPS=1526, BW=6107KiB/s (6254kB/s)(6144KiB/1006msec) 00:22:38.505 slat (usec): min=5, max=12143, avg=305.89, stdev=1197.91 00:22:38.505 clat (usec): min=23535, max=61748, avg=38905.67, stdev=6692.69 00:22:38.505 lat (usec): min=23554, max=62312, avg=39211.56, stdev=6738.04 00:22:38.505 clat percentiles (usec): 00:22:38.505 | 1.00th=[25822], 5.00th=[29492], 10.00th=[31327], 20.00th=[32900], 00:22:38.505 | 30.00th=[34341], 40.00th=[36439], 50.00th=[37487], 60.00th=[40109], 00:22:38.505 | 70.00th=[42730], 80.00th=[45351], 90.00th=[47973], 95.00th=[50594], 00:22:38.505 | 99.00th=[56886], 99.50th=[58983], 99.90th=[61604], 99.95th=[61604], 00:22:38.505 | 99.99th=[61604] 00:22:38.505 write: IOPS=1666, BW=6668KiB/s (6828kB/s)(6708KiB/1006msec); 0 zone resets 00:22:38.505 slat (usec): min=9, max=9661, avg=306.86, stdev=1052.14 00:22:38.505 clat (usec): min=5434, max=79471, avg=39476.19, stdev=15796.77 00:22:38.505 lat (usec): min=5752, max=80088, avg=39783.05, stdev=15877.30 00:22:38.505 clat percentiles (usec): 00:22:38.505 | 1.00th=[ 8979], 5.00th=[19530], 10.00th=[22938], 20.00th=[23987], 00:22:38.505 | 30.00th=[32375], 40.00th=[34866], 50.00th=[36963], 60.00th=[38536], 00:22:38.505 | 70.00th=[42206], 80.00th=[49546], 90.00th=[66847], 95.00th=[72877], 00:22:38.505 | 99.00th=[76022], 99.50th=[76022], 99.90th=[79168], 99.95th=[79168], 00:22:38.505 | 99.99th=[79168] 00:22:38.505 bw ( KiB/s): min= 5876, max= 6512, per=11.61%, avg=6194.00, stdev=449.72, samples=2 00:22:38.505 iops : min= 1469, max= 1628, avg=1548.50, stdev=112.43, samples=2 00:22:38.505 lat (msec) : 10=0.65%, 20=2.49%, 50=84.03%, 100=12.82% 00:22:38.505 cpu : usr=1.89%, sys=5.67%, ctx=452, majf=0, minf=15 00:22:38.506 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.0% 00:22:38.506 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:38.506 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:38.506 issued rwts: total=1536,1677,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:38.506 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:38.506 00:22:38.506 Run status group 0 (all jobs): 00:22:38.506 READ: bw=48.6MiB/s (50.9MB/s), 6089KiB/s-24.0MiB/s (6235kB/s-25.1MB/s), io=49.0MiB (51.4MB), run=1002-1009msec 00:22:38.506 WRITE: bw=52.1MiB/s (54.6MB/s), 6668KiB/s-24.3MiB/s (6828kB/s-25.5MB/s), io=52.6MiB (55.1MB), run=1002-1009msec 00:22:38.506 00:22:38.506 Disk stats (read/write): 00:22:38.506 nvme0n1: ios=2769/3072, merge=0/0, ticks=13817/10942, in_queue=24759, util=87.75% 00:22:38.506 nvme0n2: ios=1385/1536, merge=0/0, ticks=18693/15400, in_queue=34093, util=87.04% 00:22:38.506 nvme0n3: ios=5120/5352, merge=0/0, ticks=25063/23098, in_queue=48161, util=88.96% 00:22:38.506 nvme0n4: ios=1192/1536, merge=0/0, ticks=14912/19023, 
in_queue=33935, util=89.09% 00:22:38.506 21:35:59 -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:22:38.506 [global] 00:22:38.506 thread=1 00:22:38.506 invalidate=1 00:22:38.506 rw=randwrite 00:22:38.506 time_based=1 00:22:38.506 runtime=1 00:22:38.506 ioengine=libaio 00:22:38.506 direct=1 00:22:38.506 bs=4096 00:22:38.506 iodepth=128 00:22:38.506 norandommap=0 00:22:38.506 numjobs=1 00:22:38.506 00:22:38.506 verify_dump=1 00:22:38.506 verify_backlog=512 00:22:38.506 verify_state_save=0 00:22:38.506 do_verify=1 00:22:38.506 verify=crc32c-intel 00:22:38.506 [job0] 00:22:38.506 filename=/dev/nvme0n1 00:22:38.506 [job1] 00:22:38.506 filename=/dev/nvme0n2 00:22:38.506 [job2] 00:22:38.506 filename=/dev/nvme0n3 00:22:38.506 [job3] 00:22:38.506 filename=/dev/nvme0n4 00:22:38.506 Could not set queue depth (nvme0n1) 00:22:38.506 Could not set queue depth (nvme0n2) 00:22:38.506 Could not set queue depth (nvme0n3) 00:22:38.506 Could not set queue depth (nvme0n4) 00:22:38.506 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:22:38.506 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:22:38.506 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:22:38.506 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:22:38.506 fio-3.35 00:22:38.506 Starting 4 threads 00:22:39.880 00:22:39.880 job0: (groupid=0, jobs=1): err= 0: pid=75962: Thu Jul 11 21:36:00 2024 00:22:39.880 read: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec) 00:22:39.880 slat (usec): min=7, max=23815, avg=129.54, stdev=1000.53 00:22:39.880 clat (usec): min=6350, max=49603, avg=17786.91, stdev=7560.91 00:22:39.880 lat (usec): min=6360, max=49638, avg=17916.45, stdev=7627.71 00:22:39.880 clat percentiles (usec): 00:22:39.880 | 1.00th=[ 7111], 5.00th=[10028], 10.00th=[10159], 20.00th=[10683], 00:22:39.880 | 30.00th=[11207], 40.00th=[14877], 50.00th=[15401], 60.00th=[19006], 00:22:39.880 | 70.00th=[21890], 80.00th=[22676], 90.00th=[29754], 95.00th=[34341], 00:22:39.880 | 99.00th=[34866], 99.50th=[34866], 99.90th=[35914], 99.95th=[45876], 00:22:39.880 | 99.99th=[49546] 00:22:39.880 write: IOPS=4171, BW=16.3MiB/s (17.1MB/s)(16.3MiB/1001msec); 0 zone resets 00:22:39.880 slat (usec): min=6, max=15785, avg=105.44, stdev=674.92 00:22:39.880 clat (usec): min=745, max=34601, avg=12935.19, stdev=3579.20 00:22:39.880 lat (usec): min=763, max=34642, avg=13040.63, stdev=3550.63 00:22:39.880 clat percentiles (usec): 00:22:39.880 | 1.00th=[ 5473], 5.00th=[ 9110], 10.00th=[ 9372], 20.00th=[10159], 00:22:39.880 | 30.00th=[10421], 40.00th=[11076], 50.00th=[12387], 60.00th=[13435], 00:22:39.880 | 70.00th=[14222], 80.00th=[14746], 90.00th=[18220], 95.00th=[18482], 00:22:39.880 | 99.00th=[26346], 99.50th=[26608], 99.90th=[27132], 99.95th=[29230], 00:22:39.880 | 99.99th=[34341] 00:22:39.880 bw ( KiB/s): min=12288, max=12288, per=17.69%, avg=12288.00, stdev= 0.00, samples=1 00:22:39.880 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:22:39.880 lat (usec) : 750=0.01%, 1000=0.04% 00:22:39.880 lat (msec) : 10=9.90%, 20=69.91%, 50=20.14% 00:22:39.880 cpu : usr=2.80%, sys=10.70%, ctx=178, majf=0, minf=5 00:22:39.880 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:22:39.880 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:39.880 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:39.880 issued rwts: total=4096,4176,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:39.880 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:39.880 job1: (groupid=0, jobs=1): err= 0: pid=75963: Thu Jul 11 21:36:00 2024 00:22:39.880 read: IOPS=5695, BW=22.2MiB/s (23.3MB/s)(22.3MiB/1002msec) 00:22:39.880 slat (usec): min=8, max=3204, avg=81.21, stdev=332.77 00:22:39.880 clat (usec): min=386, max=13880, avg=10576.32, stdev=1190.33 00:22:39.880 lat (usec): min=2590, max=15388, avg=10657.53, stdev=1205.92 00:22:39.880 clat percentiles (usec): 00:22:39.880 | 1.00th=[ 6128], 5.00th=[ 8979], 10.00th=[ 9372], 20.00th=[ 9765], 00:22:39.880 | 30.00th=[10290], 40.00th=[10421], 50.00th=[10552], 60.00th=[10683], 00:22:39.880 | 70.00th=[10945], 80.00th=[11469], 90.00th=[11994], 95.00th=[12256], 00:22:39.880 | 99.00th=[12911], 99.50th=[13304], 99.90th=[13829], 99.95th=[13829], 00:22:39.880 | 99.99th=[13829] 00:22:39.881 write: IOPS=6131, BW=24.0MiB/s (25.1MB/s)(24.0MiB/1002msec); 0 zone resets 00:22:39.881 slat (usec): min=11, max=3031, avg=79.77, stdev=344.72 00:22:39.881 clat (usec): min=7656, max=14235, avg=10786.54, stdev=788.54 00:22:39.881 lat (usec): min=7682, max=14256, avg=10866.31, stdev=850.53 00:22:39.881 clat percentiles (usec): 00:22:39.881 | 1.00th=[ 8979], 5.00th=[ 9896], 10.00th=[10028], 20.00th=[10290], 00:22:39.881 | 30.00th=[10421], 40.00th=[10552], 50.00th=[10683], 60.00th=[10814], 00:22:39.881 | 70.00th=[10945], 80.00th=[11207], 90.00th=[11600], 95.00th=[12518], 00:22:39.881 | 99.00th=[13566], 99.50th=[13829], 99.90th=[14222], 99.95th=[14222], 00:22:39.881 | 99.99th=[14222] 00:22:39.881 bw ( KiB/s): min=24160, max=24576, per=35.08%, avg=24368.00, stdev=294.16, samples=2 00:22:39.881 iops : min= 6040, max= 6144, avg=6092.00, stdev=73.54, samples=2 00:22:39.881 lat (usec) : 500=0.01% 00:22:39.881 lat (msec) : 4=0.27%, 10=16.62%, 20=83.10% 00:22:39.881 cpu : usr=5.09%, sys=16.38%, ctx=533, majf=0, minf=3 00:22:39.881 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:22:39.881 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:39.881 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:39.881 issued rwts: total=5707,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:39.881 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:39.881 job2: (groupid=0, jobs=1): err= 0: pid=75964: Thu Jul 11 21:36:00 2024 00:22:39.881 read: IOPS=4542, BW=17.7MiB/s (18.6MB/s)(17.9MiB/1007msec) 00:22:39.881 slat (usec): min=4, max=7483, avg=109.34, stdev=522.62 00:22:39.881 clat (usec): min=815, max=29233, avg=14308.60, stdev=4925.75 00:22:39.881 lat (usec): min=6199, max=29246, avg=14417.94, stdev=4936.11 00:22:39.881 clat percentiles (usec): 00:22:39.881 | 1.00th=[ 9372], 5.00th=[11469], 10.00th=[11600], 20.00th=[11863], 00:22:39.881 | 30.00th=[11994], 40.00th=[12125], 50.00th=[12125], 60.00th=[12256], 00:22:39.881 | 70.00th=[12387], 80.00th=[15533], 90.00th=[23987], 95.00th=[25297], 00:22:39.881 | 99.00th=[27657], 99.50th=[28443], 99.90th=[29230], 99.95th=[29230], 00:22:39.881 | 99.99th=[29230] 00:22:39.881 write: IOPS=4575, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1007msec); 0 zone resets 00:22:39.881 slat (usec): min=10, max=6211, avg=100.53, stdev=448.43 00:22:39.881 clat (usec): min=9239, max=28226, avg=13284.79, stdev=3449.87 00:22:39.881 lat (usec): min=10592, max=28247, avg=13385.33, 
stdev=3450.34 00:22:39.881 clat percentiles (usec): 00:22:39.881 | 1.00th=[ 9896], 5.00th=[11600], 10.00th=[11731], 20.00th=[11863], 00:22:39.881 | 30.00th=[11994], 40.00th=[12125], 50.00th=[12256], 60.00th=[12256], 00:22:39.881 | 70.00th=[12387], 80.00th=[12649], 90.00th=[18744], 95.00th=[23462], 00:22:39.881 | 99.00th=[26608], 99.50th=[26870], 99.90th=[28181], 99.95th=[28181], 00:22:39.881 | 99.99th=[28181] 00:22:39.881 bw ( KiB/s): min=16120, max=20744, per=26.53%, avg=18432.00, stdev=3269.66, samples=2 00:22:39.881 iops : min= 4030, max= 5186, avg=4608.00, stdev=817.42, samples=2 00:22:39.881 lat (usec) : 1000=0.01% 00:22:39.881 lat (msec) : 10=2.21%, 20=84.80%, 50=12.98% 00:22:39.881 cpu : usr=4.17%, sys=12.82%, ctx=367, majf=0, minf=10 00:22:39.881 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:22:39.881 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:39.881 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:39.881 issued rwts: total=4574,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:39.881 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:39.881 job3: (groupid=0, jobs=1): err= 0: pid=75965: Thu Jul 11 21:36:00 2024 00:22:39.881 read: IOPS=2332, BW=9328KiB/s (9552kB/s)(9356KiB/1003msec) 00:22:39.881 slat (usec): min=5, max=8707, avg=199.59, stdev=933.13 00:22:39.881 clat (usec): min=312, max=49220, avg=26458.61, stdev=7379.09 00:22:39.881 lat (usec): min=4987, max=49274, avg=26658.20, stdev=7429.12 00:22:39.881 clat percentiles (usec): 00:22:39.881 | 1.00th=[ 5407], 5.00th=[18744], 10.00th=[21365], 20.00th=[21627], 00:22:39.881 | 30.00th=[22152], 40.00th=[22938], 50.00th=[23987], 60.00th=[25560], 00:22:39.881 | 70.00th=[28181], 80.00th=[32900], 90.00th=[39584], 95.00th=[41681], 00:22:39.881 | 99.00th=[42730], 99.50th=[43779], 99.90th=[48497], 99.95th=[49021], 00:22:39.881 | 99.99th=[49021] 00:22:39.881 write: IOPS=2552, BW=9.97MiB/s (10.5MB/s)(10.0MiB/1003msec); 0 zone resets 00:22:39.881 slat (usec): min=13, max=13710, avg=199.85, stdev=979.75 00:22:39.881 clat (usec): min=11258, max=72732, avg=25147.29, stdev=11524.33 00:22:39.881 lat (usec): min=11299, max=72754, avg=25347.14, stdev=11601.78 00:22:39.881 clat percentiles (usec): 00:22:39.881 | 1.00th=[11731], 5.00th=[12256], 10.00th=[13566], 20.00th=[17433], 00:22:39.881 | 30.00th=[19792], 40.00th=[21365], 50.00th=[22152], 60.00th=[24249], 00:22:39.881 | 70.00th=[27132], 80.00th=[28181], 90.00th=[36439], 95.00th=[53216], 00:22:39.881 | 99.00th=[68682], 99.50th=[71828], 99.90th=[72877], 99.95th=[72877], 00:22:39.881 | 99.99th=[72877] 00:22:39.881 bw ( KiB/s): min= 8200, max=12280, per=14.74%, avg=10240.00, stdev=2885.00, samples=2 00:22:39.881 iops : min= 2050, max= 3070, avg=2560.00, stdev=721.25, samples=2 00:22:39.881 lat (usec) : 500=0.02% 00:22:39.881 lat (msec) : 10=0.65%, 20=18.41%, 50=77.91%, 100=3.00% 00:22:39.881 cpu : usr=1.90%, sys=8.18%, ctx=298, majf=0, minf=15 00:22:39.881 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:22:39.881 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:39.881 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:39.881 issued rwts: total=2339,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:39.881 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:39.881 00:22:39.881 Run status group 0 (all jobs): 00:22:39.881 READ: bw=64.8MiB/s (68.0MB/s), 9328KiB/s-22.2MiB/s (9552kB/s-23.3MB/s), io=65.3MiB 
(68.5MB), run=1001-1007msec 00:22:39.881 WRITE: bw=67.8MiB/s (71.1MB/s), 9.97MiB/s-24.0MiB/s (10.5MB/s-25.1MB/s), io=68.3MiB (71.6MB), run=1001-1007msec 00:22:39.881 00:22:39.881 Disk stats (read/write): 00:22:39.881 nvme0n1: ios=3122/3456, merge=0/0, ticks=59395/43954, in_queue=103349, util=88.68% 00:22:39.881 nvme0n2: ios=5122/5120, merge=0/0, ticks=16801/15569, in_queue=32370, util=89.59% 00:22:39.881 nvme0n3: ios=4113/4378, merge=0/0, ticks=12089/11768, in_queue=23857, util=89.40% 00:22:39.881 nvme0n4: ios=2085/2085, merge=0/0, ticks=16608/16761, in_queue=33369, util=90.37% 00:22:39.881 21:36:00 -- target/fio.sh@55 -- # sync 00:22:39.881 21:36:00 -- target/fio.sh@59 -- # fio_pid=75987 00:22:39.881 21:36:00 -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:22:39.881 21:36:00 -- target/fio.sh@61 -- # sleep 3 00:22:39.881 [global] 00:22:39.881 thread=1 00:22:39.881 invalidate=1 00:22:39.881 rw=read 00:22:39.881 time_based=1 00:22:39.881 runtime=10 00:22:39.881 ioengine=libaio 00:22:39.881 direct=1 00:22:39.881 bs=4096 00:22:39.881 iodepth=1 00:22:39.881 norandommap=1 00:22:39.881 numjobs=1 00:22:39.881 00:22:39.881 [job0] 00:22:39.881 filename=/dev/nvme0n1 00:22:39.881 [job1] 00:22:39.881 filename=/dev/nvme0n2 00:22:39.881 [job2] 00:22:39.881 filename=/dev/nvme0n3 00:22:39.881 [job3] 00:22:39.881 filename=/dev/nvme0n4 00:22:39.881 Could not set queue depth (nvme0n1) 00:22:39.881 Could not set queue depth (nvme0n2) 00:22:39.881 Could not set queue depth (nvme0n3) 00:22:39.881 Could not set queue depth (nvme0n4) 00:22:39.882 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:22:39.882 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:22:39.882 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:22:39.882 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:22:39.882 fio-3.35 00:22:39.882 Starting 4 threads 00:22:42.542 21:36:03 -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:22:42.800 fio: pid=76030, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:22:42.800 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=64970752, buflen=4096 00:22:43.057 21:36:03 -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:22:43.057 fio: pid=76029, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:22:43.057 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=52822016, buflen=4096 00:22:43.314 21:36:04 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:22:43.314 21:36:04 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:22:43.314 fio: pid=76027, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:22:43.314 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=58818560, buflen=4096 00:22:43.571 21:36:04 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:22:43.571 21:36:04 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:22:43.829 fio: pid=76028, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:22:43.829 fio: io_u error on file 
/dev/nvme0n2: Remote I/O error: read offset=20045824, buflen=4096 00:22:43.829 00:22:43.829 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=76027: Thu Jul 11 21:36:04 2024 00:22:43.829 read: IOPS=4197, BW=16.4MiB/s (17.2MB/s)(56.1MiB/3421msec) 00:22:43.829 slat (usec): min=7, max=15427, avg=15.42, stdev=183.36 00:22:43.829 clat (usec): min=122, max=3327, avg=221.41, stdev=59.48 00:22:43.829 lat (usec): min=135, max=15612, avg=236.82, stdev=192.73 00:22:43.829 clat percentiles (usec): 00:22:43.829 | 1.00th=[ 135], 5.00th=[ 147], 10.00th=[ 155], 20.00th=[ 167], 00:22:43.829 | 30.00th=[ 219], 40.00th=[ 229], 50.00th=[ 235], 60.00th=[ 241], 00:22:43.829 | 70.00th=[ 247], 80.00th=[ 253], 90.00th=[ 262], 95.00th=[ 269], 00:22:43.829 | 99.00th=[ 281], 99.50th=[ 289], 99.90th=[ 537], 99.95th=[ 1045], 00:22:43.829 | 99.99th=[ 2507] 00:22:43.829 bw ( KiB/s): min=14888, max=21344, per=23.91%, avg=16430.67, stdev=2423.50, samples=6 00:22:43.829 iops : min= 3722, max= 5336, avg=4107.67, stdev=605.87, samples=6 00:22:43.829 lat (usec) : 250=76.61%, 500=23.28%, 750=0.02%, 1000=0.01% 00:22:43.829 lat (msec) : 2=0.06%, 4=0.01% 00:22:43.829 cpu : usr=1.37%, sys=4.85%, ctx=14369, majf=0, minf=1 00:22:43.829 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:43.829 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:43.829 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:43.829 issued rwts: total=14361,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:43.829 latency : target=0, window=0, percentile=100.00%, depth=1 00:22:43.829 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=76028: Thu Jul 11 21:36:04 2024 00:22:43.829 read: IOPS=5677, BW=22.2MiB/s (23.3MB/s)(83.1MiB/3748msec) 00:22:43.829 slat (usec): min=10, max=8854, avg=16.46, stdev=126.21 00:22:43.829 clat (usec): min=108, max=4048, avg=158.20, stdev=40.12 00:22:43.829 lat (usec): min=134, max=9065, avg=174.66, stdev=132.95 00:22:43.829 clat percentiles (usec): 00:22:43.829 | 1.00th=[ 133], 5.00th=[ 139], 10.00th=[ 141], 20.00th=[ 147], 00:22:43.829 | 30.00th=[ 151], 40.00th=[ 153], 50.00th=[ 157], 60.00th=[ 161], 00:22:43.829 | 70.00th=[ 165], 80.00th=[ 169], 90.00th=[ 176], 95.00th=[ 182], 00:22:43.829 | 99.00th=[ 192], 99.50th=[ 196], 99.90th=[ 212], 99.95th=[ 293], 00:22:43.829 | 99.99th=[ 1319] 00:22:43.829 bw ( KiB/s): min=21604, max=23384, per=32.93%, avg=22634.29, stdev=626.61, samples=7 00:22:43.829 iops : min= 5401, max= 5846, avg=5658.57, stdev=156.65, samples=7 00:22:43.829 lat (usec) : 250=99.93%, 500=0.04%, 750=0.01% 00:22:43.829 lat (msec) : 2=0.01%, 4=0.01%, 10=0.01% 00:22:43.829 cpu : usr=1.79%, sys=7.39%, ctx=21291, majf=0, minf=1 00:22:43.829 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:43.829 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:43.829 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:43.829 issued rwts: total=21279,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:43.829 latency : target=0, window=0, percentile=100.00%, depth=1 00:22:43.829 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=76029: Thu Jul 11 21:36:04 2024 00:22:43.829 read: IOPS=4041, BW=15.8MiB/s (16.6MB/s)(50.4MiB/3191msec) 00:22:43.829 slat (usec): min=7, max=11340, avg=14.63, stdev=121.61 00:22:43.829 clat (usec): min=3, max=7527, 
avg=231.45, stdev=83.11 00:22:43.829 lat (usec): min=150, max=11572, avg=246.08, stdev=146.56 00:22:43.829 clat percentiles (usec): 00:22:43.829 | 1.00th=[ 153], 5.00th=[ 163], 10.00th=[ 169], 20.00th=[ 212], 00:22:43.829 | 30.00th=[ 227], 40.00th=[ 233], 50.00th=[ 239], 60.00th=[ 243], 00:22:43.829 | 70.00th=[ 249], 80.00th=[ 255], 90.00th=[ 262], 95.00th=[ 269], 00:22:43.829 | 99.00th=[ 281], 99.50th=[ 289], 99.90th=[ 816], 99.95th=[ 1500], 00:22:43.829 | 99.99th=[ 2573] 00:22:43.829 bw ( KiB/s): min=15192, max=19616, per=23.56%, avg=16193.33, stdev=1684.48, samples=6 00:22:43.829 iops : min= 3798, max= 4904, avg=4048.33, stdev=421.12, samples=6 00:22:43.829 lat (usec) : 4=0.01%, 10=0.01%, 250=73.05%, 500=26.80%, 750=0.02% 00:22:43.829 lat (usec) : 1000=0.02% 00:22:43.829 lat (msec) : 2=0.06%, 4=0.02%, 10=0.01% 00:22:43.829 cpu : usr=1.41%, sys=4.76%, ctx=12905, majf=0, minf=1 00:22:43.829 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:43.829 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:43.829 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:43.829 issued rwts: total=12897,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:43.829 latency : target=0, window=0, percentile=100.00%, depth=1 00:22:43.829 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=76030: Thu Jul 11 21:36:04 2024 00:22:43.829 read: IOPS=5410, BW=21.1MiB/s (22.2MB/s)(62.0MiB/2932msec) 00:22:43.829 slat (usec): min=11, max=102, avg=14.09, stdev= 2.31 00:22:43.829 clat (usec): min=136, max=2723, avg=169.38, stdev=28.30 00:22:43.829 lat (usec): min=148, max=2737, avg=183.46, stdev=28.51 00:22:43.829 clat percentiles (usec): 00:22:43.829 | 1.00th=[ 143], 5.00th=[ 149], 10.00th=[ 153], 20.00th=[ 157], 00:22:43.829 | 30.00th=[ 161], 40.00th=[ 165], 50.00th=[ 167], 60.00th=[ 172], 00:22:43.829 | 70.00th=[ 176], 80.00th=[ 180], 90.00th=[ 188], 95.00th=[ 194], 00:22:43.829 | 99.00th=[ 206], 99.50th=[ 212], 99.90th=[ 306], 99.95th=[ 404], 00:22:43.829 | 99.99th=[ 1172] 00:22:43.829 bw ( KiB/s): min=21421, max=21832, per=31.55%, avg=21682.60, stdev=169.50, samples=5 00:22:43.829 iops : min= 5355, max= 5458, avg=5420.60, stdev=42.47, samples=5 00:22:43.829 lat (usec) : 250=99.80%, 500=0.16%, 750=0.01%, 1000=0.01% 00:22:43.829 lat (msec) : 2=0.01%, 4=0.01% 00:22:43.829 cpu : usr=1.47%, sys=6.45%, ctx=15869, majf=0, minf=1 00:22:43.829 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:43.829 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:43.829 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:43.829 issued rwts: total=15863,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:43.829 latency : target=0, window=0, percentile=100.00%, depth=1 00:22:43.829 00:22:43.829 Run status group 0 (all jobs): 00:22:43.829 READ: bw=67.1MiB/s (70.4MB/s), 15.8MiB/s-22.2MiB/s (16.6MB/s-23.3MB/s), io=252MiB (264MB), run=2932-3748msec 00:22:43.829 00:22:43.829 Disk stats (read/write): 00:22:43.829 nvme0n1: ios=13971/0, merge=0/0, ticks=3009/0, in_queue=3009, util=94.85% 00:22:43.829 nvme0n2: ios=20372/0, merge=0/0, ticks=3296/0, in_queue=3296, util=95.47% 00:22:43.829 nvme0n3: ios=12554/0, merge=0/0, ticks=2795/0, in_queue=2795, util=95.89% 00:22:43.829 nvme0n4: ios=15457/0, merge=0/0, ticks=2670/0, in_queue=2670, util=96.78% 00:22:43.829 21:36:04 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs 
$concat_malloc_bdevs 00:22:43.829 21:36:04 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:22:44.087 21:36:04 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:22:44.087 21:36:04 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:22:44.344 21:36:05 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:22:44.344 21:36:05 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:22:44.603 21:36:05 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:22:44.603 21:36:05 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:22:44.860 21:36:05 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:22:44.860 21:36:05 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:22:45.117 21:36:05 -- target/fio.sh@69 -- # fio_status=0 00:22:45.117 21:36:05 -- target/fio.sh@70 -- # wait 75987 00:22:45.117 21:36:05 -- target/fio.sh@70 -- # fio_status=4 00:22:45.117 21:36:05 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:22:45.117 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:22:45.117 21:36:05 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:22:45.117 21:36:05 -- common/autotest_common.sh@1198 -- # local i=0 00:22:45.117 21:36:05 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:22:45.117 21:36:05 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:45.117 21:36:06 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:22:45.117 21:36:06 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:45.117 nvmf hotplug test: fio failed as expected 00:22:45.117 21:36:06 -- common/autotest_common.sh@1210 -- # return 0 00:22:45.117 21:36:06 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:22:45.117 21:36:06 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:22:45.117 21:36:06 -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:45.380 21:36:06 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:22:45.381 21:36:06 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:22:45.381 21:36:06 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:22:45.381 21:36:06 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:22:45.381 21:36:06 -- target/fio.sh@91 -- # nvmftestfini 00:22:45.381 21:36:06 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:45.381 21:36:06 -- nvmf/common.sh@116 -- # sync 00:22:45.381 21:36:06 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:45.381 21:36:06 -- nvmf/common.sh@119 -- # set +e 00:22:45.381 21:36:06 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:45.381 21:36:06 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:45.381 rmmod nvme_tcp 00:22:45.381 rmmod nvme_fabrics 00:22:45.381 rmmod nvme_keyring 00:22:45.381 21:36:06 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:45.381 21:36:06 -- nvmf/common.sh@123 -- # set -e 00:22:45.381 21:36:06 -- nvmf/common.sh@124 -- # return 0 00:22:45.381 21:36:06 -- nvmf/common.sh@477 -- # '[' -n 75594 ']' 00:22:45.381 21:36:06 -- nvmf/common.sh@478 -- # killprocess 75594 
00:22:45.381 21:36:06 -- common/autotest_common.sh@926 -- # '[' -z 75594 ']' 00:22:45.381 21:36:06 -- common/autotest_common.sh@930 -- # kill -0 75594 00:22:45.381 21:36:06 -- common/autotest_common.sh@931 -- # uname 00:22:45.381 21:36:06 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:45.381 21:36:06 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 75594 00:22:45.665 killing process with pid 75594 00:22:45.665 21:36:06 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:22:45.665 21:36:06 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:22:45.665 21:36:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 75594' 00:22:45.665 21:36:06 -- common/autotest_common.sh@945 -- # kill 75594 00:22:45.665 21:36:06 -- common/autotest_common.sh@950 -- # wait 75594 00:22:45.665 21:36:06 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:45.665 21:36:06 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:45.665 21:36:06 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:45.665 21:36:06 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:45.665 21:36:06 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:45.665 21:36:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:45.665 21:36:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:45.665 21:36:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:45.665 21:36:06 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:22:45.665 ************************************ 00:22:45.665 END TEST nvmf_fio_target 00:22:45.665 ************************************ 00:22:45.665 00:22:45.665 real 0m19.520s 00:22:45.665 user 1m13.462s 00:22:45.665 sys 0m10.548s 00:22:45.665 21:36:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:45.665 21:36:06 -- common/autotest_common.sh@10 -- # set +x 00:22:45.923 21:36:06 -- nvmf/nvmf.sh@55 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:22:45.923 21:36:06 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:22:45.923 21:36:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:45.923 21:36:06 -- common/autotest_common.sh@10 -- # set +x 00:22:45.923 ************************************ 00:22:45.923 START TEST nvmf_bdevio 00:22:45.923 ************************************ 00:22:45.923 21:36:06 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:22:45.923 * Looking for test storage... 
00:22:45.923 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:22:45.923 21:36:06 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:45.923 21:36:06 -- nvmf/common.sh@7 -- # uname -s 00:22:45.923 21:36:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:45.923 21:36:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:45.923 21:36:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:45.923 21:36:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:45.923 21:36:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:45.923 21:36:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:45.923 21:36:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:45.923 21:36:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:45.923 21:36:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:45.923 21:36:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:45.923 21:36:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:65f0dc09-2f81-4c7b-a413-2a2a000e2750 00:22:45.923 21:36:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=65f0dc09-2f81-4c7b-a413-2a2a000e2750 00:22:45.923 21:36:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:45.923 21:36:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:45.923 21:36:06 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:45.923 21:36:06 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:45.923 21:36:06 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:45.923 21:36:06 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:45.923 21:36:06 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:45.923 21:36:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:45.923 21:36:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:45.923 21:36:06 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:45.923 21:36:06 -- 
paths/export.sh@5 -- # export PATH 00:22:45.923 21:36:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:45.923 21:36:06 -- nvmf/common.sh@46 -- # : 0 00:22:45.923 21:36:06 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:45.923 21:36:06 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:45.923 21:36:06 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:45.923 21:36:06 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:45.923 21:36:06 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:45.923 21:36:06 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:45.923 21:36:06 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:45.923 21:36:06 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:45.923 21:36:06 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:45.923 21:36:06 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:45.923 21:36:06 -- target/bdevio.sh@14 -- # nvmftestinit 00:22:45.923 21:36:06 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:45.923 21:36:06 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:45.923 21:36:06 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:45.923 21:36:06 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:45.923 21:36:06 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:45.923 21:36:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:45.923 21:36:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:45.923 21:36:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:45.923 21:36:06 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:22:45.923 21:36:06 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:22:45.923 21:36:06 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:22:45.923 21:36:06 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:22:45.923 21:36:06 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:22:45.923 21:36:06 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:22:45.923 21:36:06 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:45.923 21:36:06 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:45.923 21:36:06 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:45.923 21:36:06 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:22:45.923 21:36:06 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:45.923 21:36:06 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:45.923 21:36:06 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:45.923 21:36:06 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:45.923 21:36:06 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:45.923 21:36:06 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:45.923 21:36:06 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:45.923 21:36:06 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:45.923 21:36:06 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:22:45.923 
21:36:06 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:22:45.923 Cannot find device "nvmf_tgt_br" 00:22:45.923 21:36:06 -- nvmf/common.sh@154 -- # true 00:22:45.923 21:36:06 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:22:45.923 Cannot find device "nvmf_tgt_br2" 00:22:45.923 21:36:06 -- nvmf/common.sh@155 -- # true 00:22:45.923 21:36:06 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:22:45.923 21:36:06 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:22:45.923 Cannot find device "nvmf_tgt_br" 00:22:45.923 21:36:06 -- nvmf/common.sh@157 -- # true 00:22:45.923 21:36:06 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:22:45.923 Cannot find device "nvmf_tgt_br2" 00:22:45.923 21:36:06 -- nvmf/common.sh@158 -- # true 00:22:45.923 21:36:06 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:22:45.923 21:36:06 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:22:46.181 21:36:06 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:46.181 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:46.181 21:36:06 -- nvmf/common.sh@161 -- # true 00:22:46.181 21:36:06 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:46.181 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:46.181 21:36:06 -- nvmf/common.sh@162 -- # true 00:22:46.181 21:36:06 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:22:46.181 21:36:06 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:46.181 21:36:06 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:46.181 21:36:06 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:46.181 21:36:06 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:46.181 21:36:06 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:46.181 21:36:06 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:46.181 21:36:06 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:46.181 21:36:06 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:46.181 21:36:06 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:22:46.181 21:36:06 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:22:46.181 21:36:06 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:22:46.181 21:36:06 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:22:46.181 21:36:06 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:46.182 21:36:06 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:46.182 21:36:07 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:46.182 21:36:07 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:22:46.182 21:36:07 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:22:46.182 21:36:07 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:22:46.182 21:36:07 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:46.182 21:36:07 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:46.182 21:36:07 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:46.182 21:36:07 -- 
nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:46.182 21:36:07 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:22:46.182 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:46.182 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.079 ms 00:22:46.182 00:22:46.182 --- 10.0.0.2 ping statistics --- 00:22:46.182 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:46.182 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:22:46.182 21:36:07 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:22:46.182 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:46.182 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:22:46.182 00:22:46.182 --- 10.0.0.3 ping statistics --- 00:22:46.182 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:46.182 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:22:46.182 21:36:07 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:46.182 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:46.182 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:22:46.182 00:22:46.182 --- 10.0.0.1 ping statistics --- 00:22:46.182 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:46.182 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:22:46.182 21:36:07 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:46.182 21:36:07 -- nvmf/common.sh@421 -- # return 0 00:22:46.182 21:36:07 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:46.182 21:36:07 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:46.182 21:36:07 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:46.182 21:36:07 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:46.182 21:36:07 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:46.182 21:36:07 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:46.182 21:36:07 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:46.182 21:36:07 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:22:46.182 21:36:07 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:46.182 21:36:07 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:46.182 21:36:07 -- common/autotest_common.sh@10 -- # set +x 00:22:46.182 21:36:07 -- nvmf/common.sh@469 -- # nvmfpid=76292 00:22:46.182 21:36:07 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:22:46.182 21:36:07 -- nvmf/common.sh@470 -- # waitforlisten 76292 00:22:46.182 21:36:07 -- common/autotest_common.sh@819 -- # '[' -z 76292 ']' 00:22:46.182 21:36:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:46.182 21:36:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:46.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:46.182 21:36:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:46.182 21:36:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:46.182 21:36:07 -- common/autotest_common.sh@10 -- # set +x 00:22:46.441 [2024-07-11 21:36:07.160582] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:22:46.441 [2024-07-11 21:36:07.160690] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:46.441 [2024-07-11 21:36:07.302165] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:46.699 [2024-07-11 21:36:07.395805] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:46.699 [2024-07-11 21:36:07.396425] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:46.699 [2024-07-11 21:36:07.396582] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:46.699 [2024-07-11 21:36:07.396666] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:46.699 [2024-07-11 21:36:07.396858] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:22:46.699 [2024-07-11 21:36:07.396954] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:22:46.699 [2024-07-11 21:36:07.397092] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:22:46.699 [2024-07-11 21:36:07.397095] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:47.266 21:36:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:47.266 21:36:08 -- common/autotest_common.sh@852 -- # return 0 00:22:47.266 21:36:08 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:47.266 21:36:08 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:47.266 21:36:08 -- common/autotest_common.sh@10 -- # set +x 00:22:47.266 21:36:08 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:47.266 21:36:08 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:47.266 21:36:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:47.266 21:36:08 -- common/autotest_common.sh@10 -- # set +x 00:22:47.266 [2024-07-11 21:36:08.142175] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:47.266 21:36:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:47.266 21:36:08 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:47.266 21:36:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:47.266 21:36:08 -- common/autotest_common.sh@10 -- # set +x 00:22:47.266 Malloc0 00:22:47.266 21:36:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:47.266 21:36:08 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:47.266 21:36:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:47.266 21:36:08 -- common/autotest_common.sh@10 -- # set +x 00:22:47.266 21:36:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:47.266 21:36:08 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:47.266 21:36:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:47.266 21:36:08 -- common/autotest_common.sh@10 -- # set +x 00:22:47.266 21:36:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:47.266 21:36:08 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:47.266 21:36:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:47.266 21:36:08 -- common/autotest_common.sh@10 -- # set +x 00:22:47.266 
[2024-07-11 21:36:08.208031] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:47.266 21:36:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:47.266 21:36:08 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:22:47.266 21:36:08 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:22:47.266 21:36:08 -- nvmf/common.sh@520 -- # config=() 00:22:47.266 21:36:08 -- nvmf/common.sh@520 -- # local subsystem config 00:22:47.266 21:36:08 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:22:47.266 21:36:08 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:22:47.266 { 00:22:47.266 "params": { 00:22:47.266 "name": "Nvme$subsystem", 00:22:47.266 "trtype": "$TEST_TRANSPORT", 00:22:47.266 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:47.266 "adrfam": "ipv4", 00:22:47.266 "trsvcid": "$NVMF_PORT", 00:22:47.266 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:47.266 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:47.266 "hdgst": ${hdgst:-false}, 00:22:47.266 "ddgst": ${ddgst:-false} 00:22:47.266 }, 00:22:47.266 "method": "bdev_nvme_attach_controller" 00:22:47.266 } 00:22:47.266 EOF 00:22:47.266 )") 00:22:47.524 21:36:08 -- nvmf/common.sh@542 -- # cat 00:22:47.524 21:36:08 -- nvmf/common.sh@544 -- # jq . 00:22:47.524 21:36:08 -- nvmf/common.sh@545 -- # IFS=, 00:22:47.524 21:36:08 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:22:47.524 "params": { 00:22:47.524 "name": "Nvme1", 00:22:47.524 "trtype": "tcp", 00:22:47.524 "traddr": "10.0.0.2", 00:22:47.524 "adrfam": "ipv4", 00:22:47.524 "trsvcid": "4420", 00:22:47.524 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:47.524 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:47.524 "hdgst": false, 00:22:47.524 "ddgst": false 00:22:47.524 }, 00:22:47.524 "method": "bdev_nvme_attach_controller" 00:22:47.524 }' 00:22:47.524 [2024-07-11 21:36:08.260327] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:22:47.524 [2024-07-11 21:36:08.260427] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76328 ] 00:22:47.524 [2024-07-11 21:36:08.395559] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:47.783 [2024-07-11 21:36:08.490455] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:47.783 [2024-07-11 21:36:08.490526] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:47.783 [2024-07-11 21:36:08.490530] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:47.783 [2024-07-11 21:36:08.656216] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
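For reference, the target-side setup that bdevio.sh has just driven through rpc_cmd can be condensed into the sketch below. This is an illustrative reading of the trace, not the script itself: rpc_cmd is the test framework's wrapper around SPDK's JSON-RPC client, the bdevio path is shortened for readability, and all names, flags and addresses are copied from the log above.

  # One TCP transport, one 64 MiB malloc namespace, one subsystem, one listener:
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192
  rpc_cmd bdev_malloc_create 64 512 -b Malloc0
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # bdevio then attaches to that subsystem as an NVMe/TCP initiator, reading the
  # bdev_nvme_attach_controller JSON printed above from file descriptor 62:
  test/bdev/bdevio/bdevio --json /dev/fd/62

The Nvme1n1 bdev produced by that attach call is what the bdevio test suite further down runs against.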
00:22:47.784 [2024-07-11 21:36:08.656861] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:22:47.784 I/O targets: 00:22:47.784 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:22:47.784 00:22:47.784 00:22:47.784 CUnit - A unit testing framework for C - Version 2.1-3 00:22:47.784 http://cunit.sourceforge.net/ 00:22:47.784 00:22:47.784 00:22:47.784 Suite: bdevio tests on: Nvme1n1 00:22:47.784 Test: blockdev write read block ...passed 00:22:47.784 Test: blockdev write zeroes read block ...passed 00:22:47.784 Test: blockdev write zeroes read no split ...passed 00:22:47.784 Test: blockdev write zeroes read split ...passed 00:22:47.784 Test: blockdev write zeroes read split partial ...passed 00:22:47.784 Test: blockdev reset ...[2024-07-11 21:36:08.690086] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:47.784 [2024-07-11 21:36:08.690212] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1092350 (9): Bad file descriptor 00:22:47.784 passed 00:22:47.784 Test: blockdev write read 8 blocks ...[2024-07-11 21:36:08.704730] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:22:47.784 passed 00:22:47.784 Test: blockdev write read size > 128k ...passed 00:22:47.784 Test: blockdev write read invalid size ...passed 00:22:47.784 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:47.784 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:47.784 Test: blockdev write read max offset ...passed 00:22:47.784 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:47.784 Test: blockdev writev readv 8 blocks ...passed 00:22:47.784 Test: blockdev writev readv 30 x 1block ...passed 00:22:47.784 Test: blockdev writev readv block ...passed 00:22:47.784 Test: blockdev writev readv size > 128k ...passed 00:22:47.784 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:47.784 Test: blockdev comparev and writev ...[2024-07-11 21:36:08.713349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:47.784 [2024-07-11 21:36:08.713397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:47.784 [2024-07-11 21:36:08.713417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:47.784 [2024-07-11 21:36:08.713428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:47.784 passed 00:22:47.784 Test: blockdev nvme passthru rw ...[2024-07-11 21:36:08.713938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:47.784 [2024-07-11 21:36:08.713963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:47.784 [2024-07-11 21:36:08.713981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:47.784 [2024-07-11 21:36:08.713991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:47.784 [2024-07-11 21:36:08.714295] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:47.784 [2024-07-11 21:36:08.714313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:47.784 [2024-07-11 21:36:08.714330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:47.784 [2024-07-11 21:36:08.714341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:47.784 [2024-07-11 21:36:08.714643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:47.784 [2024-07-11 21:36:08.714661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:47.784 [2024-07-11 21:36:08.714678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:47.784 [2024-07-11 21:36:08.714688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:47.784 passed 00:22:47.784 Test: blockdev nvme passthru vendor specific ...passed 00:22:47.784 Test: blockdev nvme admin passthru ...[2024-07-11 21:36:08.715369] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:47.784 [2024-07-11 21:36:08.715393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:47.784 [2024-07-11 21:36:08.715694] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:47.784 [2024-07-11 21:36:08.715715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:47.784 [2024-07-11 21:36:08.715829] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:47.784 [2024-07-11 21:36:08.715846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:47.784 [2024-07-11 21:36:08.715950] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:47.784 [2024-07-11 21:36:08.715966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:47.784 passed 00:22:47.784 Test: blockdev copy ...passed 00:22:47.784 00:22:47.784 Run Summary: Type Total Ran Passed Failed Inactive 00:22:47.784 suites 1 1 n/a 0 0 00:22:47.784 tests 23 23 23 0 0 00:22:47.784 asserts 152 152 152 0 n/a 00:22:47.784 00:22:47.784 Elapsed time = 0.150 seconds 00:22:48.043 21:36:08 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:48.043 21:36:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:48.043 21:36:08 -- common/autotest_common.sh@10 -- # set +x 00:22:48.043 21:36:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:48.043 21:36:08 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:22:48.043 21:36:08 -- target/bdevio.sh@30 -- # nvmftestfini 00:22:48.043 21:36:08 -- nvmf/common.sh@476 -- 
# nvmfcleanup 00:22:48.043 21:36:08 -- nvmf/common.sh@116 -- # sync 00:22:48.043 21:36:08 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:48.043 21:36:08 -- nvmf/common.sh@119 -- # set +e 00:22:48.043 21:36:08 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:48.043 21:36:08 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:48.043 rmmod nvme_tcp 00:22:48.300 rmmod nvme_fabrics 00:22:48.300 rmmod nvme_keyring 00:22:48.300 21:36:09 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:48.300 21:36:09 -- nvmf/common.sh@123 -- # set -e 00:22:48.300 21:36:09 -- nvmf/common.sh@124 -- # return 0 00:22:48.300 21:36:09 -- nvmf/common.sh@477 -- # '[' -n 76292 ']' 00:22:48.300 21:36:09 -- nvmf/common.sh@478 -- # killprocess 76292 00:22:48.300 21:36:09 -- common/autotest_common.sh@926 -- # '[' -z 76292 ']' 00:22:48.301 21:36:09 -- common/autotest_common.sh@930 -- # kill -0 76292 00:22:48.301 21:36:09 -- common/autotest_common.sh@931 -- # uname 00:22:48.301 21:36:09 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:48.301 21:36:09 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 76292 00:22:48.301 killing process with pid 76292 00:22:48.301 21:36:09 -- common/autotest_common.sh@932 -- # process_name=reactor_3 00:22:48.301 21:36:09 -- common/autotest_common.sh@936 -- # '[' reactor_3 = sudo ']' 00:22:48.301 21:36:09 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 76292' 00:22:48.301 21:36:09 -- common/autotest_common.sh@945 -- # kill 76292 00:22:48.301 21:36:09 -- common/autotest_common.sh@950 -- # wait 76292 00:22:48.559 21:36:09 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:48.559 21:36:09 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:48.559 21:36:09 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:48.559 21:36:09 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:48.559 21:36:09 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:48.559 21:36:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:48.559 21:36:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:48.559 21:36:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:48.559 21:36:09 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:22:48.559 00:22:48.559 real 0m2.696s 00:22:48.559 user 0m8.805s 00:22:48.559 sys 0m0.750s 00:22:48.559 21:36:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:48.559 21:36:09 -- common/autotest_common.sh@10 -- # set +x 00:22:48.559 ************************************ 00:22:48.559 END TEST nvmf_bdevio 00:22:48.559 ************************************ 00:22:48.559 21:36:09 -- nvmf/nvmf.sh@57 -- # '[' tcp = tcp ']' 00:22:48.559 21:36:09 -- nvmf/nvmf.sh@58 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:48.559 21:36:09 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:22:48.559 21:36:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:48.559 21:36:09 -- common/autotest_common.sh@10 -- # set +x 00:22:48.559 ************************************ 00:22:48.559 START TEST nvmf_bdevio_no_huge 00:22:48.559 ************************************ 00:22:48.559 21:36:09 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:48.559 * Looking for test storage... 
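Between the two bdevio runs the harness tears everything down and starts over. The teardown a few lines back (nvmfcleanup / nvmftestfini) condenses to roughly the following; error handling and xtrace noise are dropped, and the explicit netns deletion is an assumption, since the log only shows _remove_spdk_ns with its output discarded:

  kill "$nvmfpid" && wait "$nvmfpid"             # stop the nvmf_tgt reactors (pid 76292 above)
  modprobe -v -r nvme-tcp                        # unloads nvme_tcp, nvme_fabrics, nvme_keyring
  modprobe -v -r nvme-fabrics
  ip netns delete nvmf_tgt_ns_spdk 2>/dev/null   # assumed equivalent of _remove_spdk_ns
  ip -4 addr flush nvmf_init_if                  # drop the initiator-side test address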
00:22:48.559 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:22:48.559 21:36:09 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:48.559 21:36:09 -- nvmf/common.sh@7 -- # uname -s 00:22:48.559 21:36:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:48.559 21:36:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:48.559 21:36:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:48.559 21:36:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:48.559 21:36:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:48.559 21:36:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:48.559 21:36:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:48.559 21:36:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:48.559 21:36:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:48.559 21:36:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:48.559 21:36:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:65f0dc09-2f81-4c7b-a413-2a2a000e2750 00:22:48.559 21:36:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=65f0dc09-2f81-4c7b-a413-2a2a000e2750 00:22:48.559 21:36:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:48.559 21:36:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:48.559 21:36:09 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:48.559 21:36:09 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:48.559 21:36:09 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:48.559 21:36:09 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:48.559 21:36:09 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:48.559 21:36:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:48.559 21:36:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:48.559 21:36:09 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:48.559 21:36:09 -- 
paths/export.sh@5 -- # export PATH 00:22:48.559 21:36:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:48.559 21:36:09 -- nvmf/common.sh@46 -- # : 0 00:22:48.559 21:36:09 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:48.559 21:36:09 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:48.559 21:36:09 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:48.560 21:36:09 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:48.560 21:36:09 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:48.560 21:36:09 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:48.560 21:36:09 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:48.560 21:36:09 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:48.560 21:36:09 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:48.560 21:36:09 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:48.560 21:36:09 -- target/bdevio.sh@14 -- # nvmftestinit 00:22:48.560 21:36:09 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:48.560 21:36:09 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:48.560 21:36:09 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:48.560 21:36:09 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:48.560 21:36:09 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:48.560 21:36:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:48.560 21:36:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:48.560 21:36:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:48.560 21:36:09 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:22:48.560 21:36:09 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:22:48.560 21:36:09 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:22:48.560 21:36:09 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:22:48.560 21:36:09 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:22:48.560 21:36:09 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:22:48.560 21:36:09 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:48.560 21:36:09 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:48.560 21:36:09 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:48.560 21:36:09 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:22:48.560 21:36:09 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:48.560 21:36:09 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:48.560 21:36:09 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:48.560 21:36:09 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:48.560 21:36:09 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:48.560 21:36:09 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:48.560 21:36:09 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:48.560 21:36:09 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:48.560 21:36:09 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:22:48.819 
21:36:09 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:22:48.819 Cannot find device "nvmf_tgt_br" 00:22:48.819 21:36:09 -- nvmf/common.sh@154 -- # true 00:22:48.819 21:36:09 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:22:48.819 Cannot find device "nvmf_tgt_br2" 00:22:48.819 21:36:09 -- nvmf/common.sh@155 -- # true 00:22:48.819 21:36:09 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:22:48.819 21:36:09 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:22:48.819 Cannot find device "nvmf_tgt_br" 00:22:48.819 21:36:09 -- nvmf/common.sh@157 -- # true 00:22:48.819 21:36:09 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:22:48.819 Cannot find device "nvmf_tgt_br2" 00:22:48.819 21:36:09 -- nvmf/common.sh@158 -- # true 00:22:48.819 21:36:09 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:22:48.819 21:36:09 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:22:48.819 21:36:09 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:48.819 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:48.819 21:36:09 -- nvmf/common.sh@161 -- # true 00:22:48.819 21:36:09 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:48.819 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:48.819 21:36:09 -- nvmf/common.sh@162 -- # true 00:22:48.819 21:36:09 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:22:48.819 21:36:09 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:48.819 21:36:09 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:48.819 21:36:09 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:48.819 21:36:09 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:48.819 21:36:09 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:48.819 21:36:09 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:48.819 21:36:09 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:48.819 21:36:09 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:48.819 21:36:09 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:22:48.819 21:36:09 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:22:48.819 21:36:09 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:22:48.819 21:36:09 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:22:48.819 21:36:09 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:48.819 21:36:09 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:48.819 21:36:09 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:48.819 21:36:09 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:22:49.077 21:36:09 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:22:49.077 21:36:09 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:22:49.077 21:36:09 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:49.077 21:36:09 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:49.077 21:36:09 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:49.077 21:36:09 -- 
nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:49.077 21:36:09 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:22:49.077 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:49.077 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.140 ms 00:22:49.077 00:22:49.077 --- 10.0.0.2 ping statistics --- 00:22:49.077 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:49.077 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:22:49.077 21:36:09 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:22:49.077 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:49.077 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.089 ms 00:22:49.077 00:22:49.077 --- 10.0.0.3 ping statistics --- 00:22:49.077 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:49.078 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:22:49.078 21:36:09 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:49.078 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:49.078 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.067 ms 00:22:49.078 00:22:49.078 --- 10.0.0.1 ping statistics --- 00:22:49.078 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:49.078 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:22:49.078 21:36:09 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:49.078 21:36:09 -- nvmf/common.sh@421 -- # return 0 00:22:49.078 21:36:09 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:49.078 21:36:09 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:49.078 21:36:09 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:49.078 21:36:09 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:49.078 21:36:09 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:49.078 21:36:09 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:49.078 21:36:09 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:49.078 21:36:09 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:22:49.078 21:36:09 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:49.078 21:36:09 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:49.078 21:36:09 -- common/autotest_common.sh@10 -- # set +x 00:22:49.078 21:36:09 -- nvmf/common.sh@469 -- # nvmfpid=76505 00:22:49.078 21:36:09 -- nvmf/common.sh@470 -- # waitforlisten 76505 00:22:49.078 21:36:09 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:22:49.078 21:36:09 -- common/autotest_common.sh@819 -- # '[' -z 76505 ']' 00:22:49.078 21:36:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:49.078 21:36:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:49.078 21:36:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:49.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:49.078 21:36:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:49.078 21:36:09 -- common/autotest_common.sh@10 -- # set +x 00:22:49.078 [2024-07-11 21:36:09.928369] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
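The bring-up just traced is nvmf_veth_init again: the target gets its own network namespace, joined to the root namespace through veth pairs and a bridge. Stripped of timestamps, and with the individual link-up steps omitted, the topology the trace builds looks roughly like this (all names and addresses as in the log):

  ip netns add nvmf_tgt_ns_spdk                                # target lives in its own netns
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side, stays in the root ns
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # first target interface
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # second target interface
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

  ip addr add 10.0.0.1/24 dev nvmf_init_if                               # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # first listener
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2  # second listener

  ip link add nvmf_br type bridge && ip link set nvmf_br up    # bridge ties the *_br peers together
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                     # sanity-check the data path

The nvmf_tgt launched right after this inside nvmf_tgt_ns_spdk therefore listens on 10.0.0.2/10.0.0.3, while the bdevio initiator connects from 10.0.0.1 in the root namespace.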
00:22:49.078 [2024-07-11 21:36:09.928480] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:22:49.336 [2024-07-11 21:36:10.069251] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:49.336 [2024-07-11 21:36:10.162582] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:49.336 [2024-07-11 21:36:10.162753] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:49.336 [2024-07-11 21:36:10.162767] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:49.336 [2024-07-11 21:36:10.162777] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:49.336 [2024-07-11 21:36:10.162949] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:22:49.336 [2024-07-11 21:36:10.163101] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:22:49.336 [2024-07-11 21:36:10.163233] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:49.336 [2024-07-11 21:36:10.163235] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:22:50.271 21:36:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:50.271 21:36:10 -- common/autotest_common.sh@852 -- # return 0 00:22:50.271 21:36:10 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:50.271 21:36:10 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:50.271 21:36:10 -- common/autotest_common.sh@10 -- # set +x 00:22:50.271 21:36:10 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:50.271 21:36:10 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:50.271 21:36:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:50.271 21:36:10 -- common/autotest_common.sh@10 -- # set +x 00:22:50.271 [2024-07-11 21:36:10.933135] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:50.271 21:36:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:50.271 21:36:10 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:50.271 21:36:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:50.271 21:36:10 -- common/autotest_common.sh@10 -- # set +x 00:22:50.271 Malloc0 00:22:50.271 21:36:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:50.271 21:36:10 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:50.271 21:36:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:50.271 21:36:10 -- common/autotest_common.sh@10 -- # set +x 00:22:50.271 21:36:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:50.271 21:36:10 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:50.271 21:36:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:50.271 21:36:10 -- common/autotest_common.sh@10 -- # set +x 00:22:50.271 21:36:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:50.271 21:36:10 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:50.271 21:36:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:50.271 21:36:10 -- common/autotest_common.sh@10 -- # set +x 00:22:50.271 
[2024-07-11 21:36:10.973305] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:50.271 21:36:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:50.271 21:36:10 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:22:50.271 21:36:10 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:22:50.271 21:36:10 -- nvmf/common.sh@520 -- # config=() 00:22:50.271 21:36:10 -- nvmf/common.sh@520 -- # local subsystem config 00:22:50.271 21:36:10 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:22:50.271 21:36:10 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:22:50.271 { 00:22:50.271 "params": { 00:22:50.271 "name": "Nvme$subsystem", 00:22:50.271 "trtype": "$TEST_TRANSPORT", 00:22:50.271 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:50.271 "adrfam": "ipv4", 00:22:50.271 "trsvcid": "$NVMF_PORT", 00:22:50.271 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:50.271 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:50.271 "hdgst": ${hdgst:-false}, 00:22:50.271 "ddgst": ${ddgst:-false} 00:22:50.271 }, 00:22:50.271 "method": "bdev_nvme_attach_controller" 00:22:50.271 } 00:22:50.271 EOF 00:22:50.271 )") 00:22:50.271 21:36:10 -- nvmf/common.sh@542 -- # cat 00:22:50.271 21:36:10 -- nvmf/common.sh@544 -- # jq . 00:22:50.271 21:36:10 -- nvmf/common.sh@545 -- # IFS=, 00:22:50.271 21:36:10 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:22:50.271 "params": { 00:22:50.271 "name": "Nvme1", 00:22:50.271 "trtype": "tcp", 00:22:50.271 "traddr": "10.0.0.2", 00:22:50.271 "adrfam": "ipv4", 00:22:50.271 "trsvcid": "4420", 00:22:50.271 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:50.271 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:50.271 "hdgst": false, 00:22:50.271 "ddgst": false 00:22:50.271 }, 00:22:50.271 "method": "bdev_nvme_attach_controller" 00:22:50.271 }' 00:22:50.271 [2024-07-11 21:36:11.020374] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:22:50.271 [2024-07-11 21:36:11.020472] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid76541 ] 00:22:50.271 [2024-07-11 21:36:11.155162] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:50.529 [2024-07-11 21:36:11.253610] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:50.529 [2024-07-11 21:36:11.253663] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:50.529 [2024-07-11 21:36:11.253666] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:50.529 [2024-07-11 21:36:11.416541] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
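Functionally this run mirrors nvmf_bdevio; the difference is purely in how the two processes are launched, as the invocations in the trace show: no hugepages and a fixed memory size (-s, in MB). Paths below are shortened relative to /home/vagrant/spdk_repo/spdk; flags are copied from the log.

  # Target, inside the test namespace:
  ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78
  # bdevio initiator, same JSON config on fd 62, also without hugepages:
  test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024

The same 23-test bdevio suite then runs against the same Nvme1n1 target, as the summary below shows.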
00:22:50.529 [2024-07-11 21:36:11.416596] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:22:50.529 I/O targets: 00:22:50.529 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:22:50.529 00:22:50.529 00:22:50.529 CUnit - A unit testing framework for C - Version 2.1-3 00:22:50.529 http://cunit.sourceforge.net/ 00:22:50.529 00:22:50.529 00:22:50.529 Suite: bdevio tests on: Nvme1n1 00:22:50.529 Test: blockdev write read block ...passed 00:22:50.530 Test: blockdev write zeroes read block ...passed 00:22:50.530 Test: blockdev write zeroes read no split ...passed 00:22:50.530 Test: blockdev write zeroes read split ...passed 00:22:50.530 Test: blockdev write zeroes read split partial ...passed 00:22:50.530 Test: blockdev reset ...[2024-07-11 21:36:11.456329] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:50.530 [2024-07-11 21:36:11.456450] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d0ee0 (9): Bad file descriptor 00:22:50.530 [2024-07-11 21:36:11.475119] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:22:50.530 passed 00:22:50.530 Test: blockdev write read 8 blocks ...passed 00:22:50.530 Test: blockdev write read size > 128k ...passed 00:22:50.530 Test: blockdev write read invalid size ...passed 00:22:50.530 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:50.530 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:50.530 Test: blockdev write read max offset ...passed 00:22:50.530 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:50.789 Test: blockdev writev readv 8 blocks ...passed 00:22:50.789 Test: blockdev writev readv 30 x 1block ...passed 00:22:50.789 Test: blockdev writev readv block ...passed 00:22:50.789 Test: blockdev writev readv size > 128k ...passed 00:22:50.789 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:50.789 Test: blockdev comparev and writev ...[2024-07-11 21:36:11.484681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:50.789 [2024-07-11 21:36:11.484855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:50.789 [2024-07-11 21:36:11.484966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:50.789 [2024-07-11 21:36:11.485062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:50.789 [2024-07-11 21:36:11.485556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:50.789 [2024-07-11 21:36:11.485684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:50.789 [2024-07-11 21:36:11.485808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:50.789 [2024-07-11 21:36:11.485909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:50.789 [2024-07-11 21:36:11.486453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE 
sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:50.789 [2024-07-11 21:36:11.486573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:50.789 [2024-07-11 21:36:11.486677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:50.789 [2024-07-11 21:36:11.486768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:50.789 [2024-07-11 21:36:11.487217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:50.789 [2024-07-11 21:36:11.487343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:50.789 [2024-07-11 21:36:11.487460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:50.789 [2024-07-11 21:36:11.487585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:50.789 passed 00:22:50.789 Test: blockdev nvme passthru rw ...passed 00:22:50.789 Test: blockdev nvme passthru vendor specific ...[2024-07-11 21:36:11.488603] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:50.789 [2024-07-11 21:36:11.488724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:50.789 [2024-07-11 21:36:11.489025] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:50.789 [2024-07-11 21:36:11.489141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:50.789 [2024-07-11 21:36:11.489434] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:50.789 [2024-07-11 21:36:11.489573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:50.789 [2024-07-11 21:36:11.489892] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:50.789 [2024-07-11 21:36:11.490012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:50.789 passed 00:22:50.789 Test: blockdev nvme admin passthru ...passed 00:22:50.789 Test: blockdev copy ...passed 00:22:50.789 00:22:50.789 Run Summary: Type Total Ran Passed Failed Inactive 00:22:50.789 suites 1 1 n/a 0 0 00:22:50.789 tests 23 23 23 0 0 00:22:50.789 asserts 152 152 152 0 n/a 00:22:50.789 00:22:50.789 Elapsed time = 0.173 seconds 00:22:51.048 21:36:11 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:51.048 21:36:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:51.048 21:36:11 -- common/autotest_common.sh@10 -- # set +x 00:22:51.048 21:36:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:51.048 21:36:11 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:22:51.048 21:36:11 -- target/bdevio.sh@30 -- # nvmftestfini 00:22:51.048 21:36:11 -- nvmf/common.sh@476 
-- # nvmfcleanup 00:22:51.048 21:36:11 -- nvmf/common.sh@116 -- # sync 00:22:51.048 21:36:11 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:51.048 21:36:11 -- nvmf/common.sh@119 -- # set +e 00:22:51.048 21:36:11 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:51.048 21:36:11 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:51.048 rmmod nvme_tcp 00:22:51.048 rmmod nvme_fabrics 00:22:51.048 rmmod nvme_keyring 00:22:51.048 21:36:11 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:51.307 21:36:11 -- nvmf/common.sh@123 -- # set -e 00:22:51.307 21:36:12 -- nvmf/common.sh@124 -- # return 0 00:22:51.307 21:36:12 -- nvmf/common.sh@477 -- # '[' -n 76505 ']' 00:22:51.307 21:36:12 -- nvmf/common.sh@478 -- # killprocess 76505 00:22:51.307 21:36:12 -- common/autotest_common.sh@926 -- # '[' -z 76505 ']' 00:22:51.307 21:36:12 -- common/autotest_common.sh@930 -- # kill -0 76505 00:22:51.307 21:36:12 -- common/autotest_common.sh@931 -- # uname 00:22:51.307 21:36:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:51.307 21:36:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 76505 00:22:51.307 21:36:12 -- common/autotest_common.sh@932 -- # process_name=reactor_3 00:22:51.307 21:36:12 -- common/autotest_common.sh@936 -- # '[' reactor_3 = sudo ']' 00:22:51.307 21:36:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 76505' 00:22:51.307 killing process with pid 76505 00:22:51.307 21:36:12 -- common/autotest_common.sh@945 -- # kill 76505 00:22:51.307 21:36:12 -- common/autotest_common.sh@950 -- # wait 76505 00:22:51.566 21:36:12 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:51.566 21:36:12 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:51.566 21:36:12 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:51.566 21:36:12 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:51.566 21:36:12 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:51.566 21:36:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:51.566 21:36:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:51.566 21:36:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:51.566 21:36:12 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:22:51.566 00:22:51.566 real 0m3.061s 00:22:51.566 user 0m10.149s 00:22:51.566 sys 0m1.252s 00:22:51.566 21:36:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:51.566 21:36:12 -- common/autotest_common.sh@10 -- # set +x 00:22:51.566 ************************************ 00:22:51.566 END TEST nvmf_bdevio_no_huge 00:22:51.566 ************************************ 00:22:51.566 21:36:12 -- nvmf/nvmf.sh@59 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:51.566 21:36:12 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:22:51.566 21:36:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:51.566 21:36:12 -- common/autotest_common.sh@10 -- # set +x 00:22:51.566 ************************************ 00:22:51.566 START TEST nvmf_tls 00:22:51.566 ************************************ 00:22:51.566 21:36:12 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:51.824 * Looking for test storage... 
00:22:51.824 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:22:51.824 21:36:12 -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:51.824 21:36:12 -- nvmf/common.sh@7 -- # uname -s 00:22:51.824 21:36:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:51.824 21:36:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:51.824 21:36:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:51.824 21:36:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:51.824 21:36:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:51.824 21:36:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:51.824 21:36:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:51.824 21:36:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:51.824 21:36:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:51.824 21:36:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:51.824 21:36:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:65f0dc09-2f81-4c7b-a413-2a2a000e2750 00:22:51.824 21:36:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=65f0dc09-2f81-4c7b-a413-2a2a000e2750 00:22:51.824 21:36:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:51.824 21:36:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:51.824 21:36:12 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:51.824 21:36:12 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:51.824 21:36:12 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:51.824 21:36:12 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:51.824 21:36:12 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:51.824 21:36:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:51.824 21:36:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:51.824 21:36:12 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:51.824 21:36:12 -- paths/export.sh@5 
-- # export PATH 00:22:51.824 21:36:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:51.825 21:36:12 -- nvmf/common.sh@46 -- # : 0 00:22:51.825 21:36:12 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:51.825 21:36:12 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:51.825 21:36:12 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:51.825 21:36:12 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:51.825 21:36:12 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:51.825 21:36:12 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:51.825 21:36:12 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:51.825 21:36:12 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:51.825 21:36:12 -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:51.825 21:36:12 -- target/tls.sh@71 -- # nvmftestinit 00:22:51.825 21:36:12 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:51.825 21:36:12 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:51.825 21:36:12 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:51.825 21:36:12 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:51.825 21:36:12 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:51.825 21:36:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:51.825 21:36:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:51.825 21:36:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:51.825 21:36:12 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:22:51.825 21:36:12 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:22:51.825 21:36:12 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:22:51.825 21:36:12 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:22:51.825 21:36:12 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:22:51.825 21:36:12 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:22:51.825 21:36:12 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:51.825 21:36:12 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:51.825 21:36:12 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:51.825 21:36:12 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:22:51.825 21:36:12 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:51.825 21:36:12 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:51.825 21:36:12 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:51.825 21:36:12 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:51.825 21:36:12 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:51.825 21:36:12 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:51.825 21:36:12 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:51.825 21:36:12 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:51.825 21:36:12 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:22:51.825 21:36:12 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br 
nomaster 00:22:51.825 Cannot find device "nvmf_tgt_br" 00:22:51.825 21:36:12 -- nvmf/common.sh@154 -- # true 00:22:51.825 21:36:12 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:22:51.825 Cannot find device "nvmf_tgt_br2" 00:22:51.825 21:36:12 -- nvmf/common.sh@155 -- # true 00:22:51.825 21:36:12 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:22:51.825 21:36:12 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:22:51.825 Cannot find device "nvmf_tgt_br" 00:22:51.825 21:36:12 -- nvmf/common.sh@157 -- # true 00:22:51.825 21:36:12 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:22:51.825 Cannot find device "nvmf_tgt_br2" 00:22:51.825 21:36:12 -- nvmf/common.sh@158 -- # true 00:22:51.825 21:36:12 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:22:51.825 21:36:12 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:22:51.825 21:36:12 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:51.825 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:51.825 21:36:12 -- nvmf/common.sh@161 -- # true 00:22:51.825 21:36:12 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:51.825 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:51.825 21:36:12 -- nvmf/common.sh@162 -- # true 00:22:51.825 21:36:12 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:22:51.825 21:36:12 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:51.825 21:36:12 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:51.825 21:36:12 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:51.825 21:36:12 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:51.825 21:36:12 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:52.082 21:36:12 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:52.082 21:36:12 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:52.082 21:36:12 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:52.082 21:36:12 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:22:52.082 21:36:12 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:22:52.082 21:36:12 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:22:52.083 21:36:12 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:22:52.083 21:36:12 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:52.083 21:36:12 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:52.083 21:36:12 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:52.083 21:36:12 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:22:52.083 21:36:12 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:22:52.083 21:36:12 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:22:52.083 21:36:12 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:52.083 21:36:12 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:52.083 21:36:12 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:52.083 21:36:12 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o 
nvmf_br -j ACCEPT 00:22:52.083 21:36:12 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:22:52.083 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:52.083 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms 00:22:52.083 00:22:52.083 --- 10.0.0.2 ping statistics --- 00:22:52.083 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:52.083 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:22:52.083 21:36:12 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:22:52.083 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:52.083 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:22:52.083 00:22:52.083 --- 10.0.0.3 ping statistics --- 00:22:52.083 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:52.083 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:22:52.083 21:36:12 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:52.083 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:52.083 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:22:52.083 00:22:52.083 --- 10.0.0.1 ping statistics --- 00:22:52.083 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:52.083 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:22:52.083 21:36:12 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:52.083 21:36:12 -- nvmf/common.sh@421 -- # return 0 00:22:52.083 21:36:12 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:52.083 21:36:12 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:52.083 21:36:12 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:52.083 21:36:12 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:52.083 21:36:12 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:52.083 21:36:12 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:52.083 21:36:12 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:52.083 21:36:12 -- target/tls.sh@72 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:22:52.083 21:36:12 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:52.083 21:36:12 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:52.083 21:36:12 -- common/autotest_common.sh@10 -- # set +x 00:22:52.083 21:36:12 -- nvmf/common.sh@469 -- # nvmfpid=76720 00:22:52.083 21:36:12 -- nvmf/common.sh@470 -- # waitforlisten 76720 00:22:52.083 21:36:12 -- common/autotest_common.sh@819 -- # '[' -z 76720 ']' 00:22:52.083 21:36:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:52.083 21:36:12 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:22:52.083 21:36:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:52.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:52.083 21:36:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:52.083 21:36:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:52.083 21:36:12 -- common/autotest_common.sh@10 -- # set +x 00:22:52.083 [2024-07-11 21:36:12.981526] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
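The nvmf_veth_init trace above is the entire virtual topology this suite runs on: one veth pair for the initiator, two pairs whose target ends are moved into the nvmf_tgt_ns_spdk namespace, everything bridged over nvmf_br, addresses 10.0.0.1-3/24, and an iptables rule admitting TCP port 4420. A condensed sketch of the same setup, using only commands and names that appear in the trace above:

    # sketch of the namespace/veth topology built by nvmf_veth_init above
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up && ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up  && ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT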
00:22:52.083 [2024-07-11 21:36:12.982514] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:52.340 [2024-07-11 21:36:13.118881] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:52.340 [2024-07-11 21:36:13.208712] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:52.340 [2024-07-11 21:36:13.208867] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:52.340 [2024-07-11 21:36:13.208880] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:52.340 [2024-07-11 21:36:13.208890] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:52.340 [2024-07-11 21:36:13.208917] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:52.340 21:36:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:52.340 21:36:13 -- common/autotest_common.sh@852 -- # return 0 00:22:52.340 21:36:13 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:52.340 21:36:13 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:52.340 21:36:13 -- common/autotest_common.sh@10 -- # set +x 00:22:52.340 21:36:13 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:52.340 21:36:13 -- target/tls.sh@74 -- # '[' tcp '!=' tcp ']' 00:22:52.340 21:36:13 -- target/tls.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:22:52.905 true 00:22:52.905 21:36:13 -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:52.905 21:36:13 -- target/tls.sh@82 -- # jq -r .tls_version 00:22:52.905 21:36:13 -- target/tls.sh@82 -- # version=0 00:22:52.905 21:36:13 -- target/tls.sh@83 -- # [[ 0 != \0 ]] 00:22:52.905 21:36:13 -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:53.471 21:36:14 -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:53.471 21:36:14 -- target/tls.sh@90 -- # jq -r .tls_version 00:22:53.471 21:36:14 -- target/tls.sh@90 -- # version=13 00:22:53.471 21:36:14 -- target/tls.sh@91 -- # [[ 13 != \1\3 ]] 00:22:53.471 21:36:14 -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:22:53.729 21:36:14 -- target/tls.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:53.729 21:36:14 -- target/tls.sh@98 -- # jq -r .tls_version 00:22:53.987 21:36:14 -- target/tls.sh@98 -- # version=7 00:22:53.987 21:36:14 -- target/tls.sh@99 -- # [[ 7 != \7 ]] 00:22:53.987 21:36:14 -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:53.987 21:36:14 -- target/tls.sh@105 -- # jq -r .enable_ktls 00:22:54.246 21:36:15 -- target/tls.sh@105 -- # ktls=false 00:22:54.246 21:36:15 -- target/tls.sh@106 -- # [[ false != \f\a\l\s\e ]] 00:22:54.246 21:36:15 -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:22:54.504 21:36:15 -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:54.504 21:36:15 -- target/tls.sh@113 -- # jq -r 
.enable_ktls 00:22:54.762 21:36:15 -- target/tls.sh@113 -- # ktls=true 00:22:54.762 21:36:15 -- target/tls.sh@114 -- # [[ true != \t\r\u\e ]] 00:22:54.762 21:36:15 -- target/tls.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:22:55.021 21:36:15 -- target/tls.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:55.021 21:36:15 -- target/tls.sh@121 -- # jq -r .enable_ktls 00:22:55.280 21:36:16 -- target/tls.sh@121 -- # ktls=false 00:22:55.280 21:36:16 -- target/tls.sh@122 -- # [[ false != \f\a\l\s\e ]] 00:22:55.280 21:36:16 -- target/tls.sh@127 -- # format_interchange_psk 00112233445566778899aabbccddeeff 00:22:55.280 21:36:16 -- target/tls.sh@49 -- # local key hash crc 00:22:55.280 21:36:16 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff 00:22:55.280 21:36:16 -- target/tls.sh@51 -- # hash=01 00:22:55.280 21:36:16 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff 00:22:55.280 21:36:16 -- target/tls.sh@52 -- # gzip -1 -c 00:22:55.280 21:36:16 -- target/tls.sh@52 -- # tail -c8 00:22:55.280 21:36:16 -- target/tls.sh@52 -- # head -c 4 00:22:55.280 21:36:16 -- target/tls.sh@52 -- # crc='p$H�' 00:22:55.280 21:36:16 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:22:55.280 21:36:16 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeffp$H�' 00:22:55.280 21:36:16 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:55.280 21:36:16 -- target/tls.sh@127 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:55.280 21:36:16 -- target/tls.sh@128 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 00:22:55.280 21:36:16 -- target/tls.sh@49 -- # local key hash crc 00:22:55.280 21:36:16 -- target/tls.sh@51 -- # key=ffeeddccbbaa99887766554433221100 00:22:55.280 21:36:16 -- target/tls.sh@51 -- # hash=01 00:22:55.280 21:36:16 -- target/tls.sh@52 -- # echo -n ffeeddccbbaa99887766554433221100 00:22:55.280 21:36:16 -- target/tls.sh@52 -- # gzip -1 -c 00:22:55.280 21:36:16 -- target/tls.sh@52 -- # tail -c8 00:22:55.280 21:36:16 -- target/tls.sh@52 -- # head -c 4 00:22:55.280 21:36:16 -- target/tls.sh@52 -- # crc=$'_\006o\330' 00:22:55.280 21:36:16 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:22:55.280 21:36:16 -- target/tls.sh@54 -- # echo -n $'ffeeddccbbaa99887766554433221100_\006o\330' 00:22:55.280 21:36:16 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:55.280 21:36:16 -- target/tls.sh@128 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:55.280 21:36:16 -- target/tls.sh@130 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:22:55.280 21:36:16 -- target/tls.sh@131 -- # key_2_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:22:55.280 21:36:16 -- target/tls.sh@133 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:55.280 21:36:16 -- target/tls.sh@134 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:55.280 21:36:16 -- target/tls.sh@136 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:22:55.280 21:36:16 -- target/tls.sh@137 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:22:55.280 21:36:16 -- target/tls.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:55.538 21:36:16 -- target/tls.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:22:55.796 21:36:16 -- target/tls.sh@142 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:22:55.796 21:36:16 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:22:55.796 21:36:16 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:56.054 [2024-07-11 21:36:16.884154] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:56.054 21:36:16 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:56.313 21:36:17 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:56.572 [2024-07-11 21:36:17.368279] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:56.572 [2024-07-11 21:36:17.368556] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:56.572 21:36:17 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:56.829 malloc0 00:22:56.829 21:36:17 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:57.108 21:36:17 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:22:57.366 21:36:18 -- target/tls.sh@146 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:23:07.378 Initializing NVMe Controllers 00:23:07.378 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:07.378 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:07.378 Initialization complete. Launching workers. 
00:23:07.378 ======================================================== 00:23:07.378 Latency(us) 00:23:07.378 Device Information : IOPS MiB/s Average min max 00:23:07.378 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9831.79 38.41 6510.90 1455.13 12453.36 00:23:07.378 ======================================================== 00:23:07.378 Total : 9831.79 38.41 6510.90 1455.13 12453.36 00:23:07.378 00:23:07.378 21:36:28 -- target/tls.sh@152 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:23:07.378 21:36:28 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:07.378 21:36:28 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:07.378 21:36:28 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:07.378 21:36:28 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:23:07.378 21:36:28 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:07.378 21:36:28 -- target/tls.sh@28 -- # bdevperf_pid=76954 00:23:07.378 21:36:28 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:07.378 21:36:28 -- target/tls.sh@31 -- # waitforlisten 76954 /var/tmp/bdevperf.sock 00:23:07.378 21:36:28 -- common/autotest_common.sh@819 -- # '[' -z 76954 ']' 00:23:07.378 21:36:28 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:07.378 21:36:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:07.378 21:36:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:07.378 21:36:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:07.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:07.378 21:36:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:07.378 21:36:28 -- common/autotest_common.sh@10 -- # set +x 00:23:07.637 [2024-07-11 21:36:28.355435] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:23:07.637 [2024-07-11 21:36:28.355591] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76954 ] 00:23:07.637 [2024-07-11 21:36:28.498570] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:07.895 [2024-07-11 21:36:28.602327] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:08.461 21:36:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:08.461 21:36:29 -- common/autotest_common.sh@852 -- # return 0 00:23:08.461 21:36:29 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:23:08.720 [2024-07-11 21:36:29.530345] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:08.720 TLSTESTn1 00:23:08.720 21:36:29 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:08.978 Running I/O for 10 seconds... 
00:23:18.943 00:23:18.943 Latency(us) 00:23:18.943 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:18.943 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:18.943 Verification LBA range: start 0x0 length 0x2000 00:23:18.943 TLSTESTn1 : 10.01 5728.64 22.38 0.00 0.00 22307.41 5153.51 28120.90 00:23:18.943 =================================================================================================================== 00:23:18.943 Total : 5728.64 22.38 0.00 0.00 22307.41 5153.51 28120.90 00:23:18.943 0 00:23:18.943 21:36:39 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:18.943 21:36:39 -- target/tls.sh@45 -- # killprocess 76954 00:23:18.943 21:36:39 -- common/autotest_common.sh@926 -- # '[' -z 76954 ']' 00:23:18.943 21:36:39 -- common/autotest_common.sh@930 -- # kill -0 76954 00:23:18.943 21:36:39 -- common/autotest_common.sh@931 -- # uname 00:23:18.943 21:36:39 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:18.943 21:36:39 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 76954 00:23:18.943 21:36:39 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:23:18.943 21:36:39 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:23:18.943 killing process with pid 76954 00:23:18.943 21:36:39 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 76954' 00:23:18.943 Received shutdown signal, test time was about 10.000000 seconds 00:23:18.943 00:23:18.943 Latency(us) 00:23:18.943 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:18.943 =================================================================================================================== 00:23:18.943 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:18.943 21:36:39 -- common/autotest_common.sh@945 -- # kill 76954 00:23:18.943 21:36:39 -- common/autotest_common.sh@950 -- # wait 76954 00:23:19.201 21:36:39 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:23:19.201 21:36:39 -- common/autotest_common.sh@640 -- # local es=0 00:23:19.201 21:36:39 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:23:19.201 21:36:39 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:23:19.201 21:36:39 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:19.201 21:36:39 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:23:19.201 21:36:39 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:19.201 21:36:39 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:23:19.201 21:36:39 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:19.201 21:36:39 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:19.201 21:36:39 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:19.201 21:36:39 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt' 00:23:19.201 21:36:39 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:19.201 21:36:39 -- target/tls.sh@28 -- # bdevperf_pid=77087 00:23:19.201 21:36:39 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:19.201 21:36:39 -- target/tls.sh@27 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:19.201 21:36:39 -- target/tls.sh@31 -- # waitforlisten 77087 /var/tmp/bdevperf.sock 00:23:19.201 21:36:39 -- common/autotest_common.sh@819 -- # '[' -z 77087 ']' 00:23:19.201 21:36:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:19.201 21:36:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:19.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:19.201 21:36:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:19.201 21:36:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:19.202 21:36:39 -- common/autotest_common.sh@10 -- # set +x 00:23:19.202 [2024-07-11 21:36:40.042526] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:23:19.202 [2024-07-11 21:36:40.042643] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77087 ] 00:23:19.460 [2024-07-11 21:36:40.188140] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:19.460 [2024-07-11 21:36:40.281910] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:20.027 21:36:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:20.027 21:36:40 -- common/autotest_common.sh@852 -- # return 0 00:23:20.027 21:36:40 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:23:20.285 [2024-07-11 21:36:41.193668] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:20.285 [2024-07-11 21:36:41.201407] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:20.285 [2024-07-11 21:36:41.202362] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15f84f0 (107): Transport endpoint is not connected 00:23:20.285 [2024-07-11 21:36:41.203349] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15f84f0 (9): Bad file descriptor 00:23:20.285 [2024-07-11 21:36:41.204346] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:20.285 [2024-07-11 21:36:41.204366] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:20.285 [2024-07-11 21:36:41.204377] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:23:20.285 request: 00:23:20.285 { 00:23:20.285 "name": "TLSTEST", 00:23:20.285 "trtype": "tcp", 00:23:20.285 "traddr": "10.0.0.2", 00:23:20.285 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:20.285 "adrfam": "ipv4", 00:23:20.285 "trsvcid": "4420", 00:23:20.285 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:20.285 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt", 00:23:20.285 "method": "bdev_nvme_attach_controller", 00:23:20.285 "req_id": 1 00:23:20.285 } 00:23:20.285 Got JSON-RPC error response 00:23:20.285 response: 00:23:20.285 { 00:23:20.285 "code": -32602, 00:23:20.285 "message": "Invalid parameters" 00:23:20.285 } 00:23:20.285 21:36:41 -- target/tls.sh@36 -- # killprocess 77087 00:23:20.285 21:36:41 -- common/autotest_common.sh@926 -- # '[' -z 77087 ']' 00:23:20.285 21:36:41 -- common/autotest_common.sh@930 -- # kill -0 77087 00:23:20.285 21:36:41 -- common/autotest_common.sh@931 -- # uname 00:23:20.285 21:36:41 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:20.285 21:36:41 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 77087 00:23:20.544 21:36:41 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:23:20.544 21:36:41 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:23:20.544 killing process with pid 77087 00:23:20.544 21:36:41 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 77087' 00:23:20.544 Received shutdown signal, test time was about 10.000000 seconds 00:23:20.544 00:23:20.544 Latency(us) 00:23:20.544 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:20.544 =================================================================================================================== 00:23:20.544 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:20.544 21:36:41 -- common/autotest_common.sh@945 -- # kill 77087 00:23:20.544 21:36:41 -- common/autotest_common.sh@950 -- # wait 77087 00:23:20.544 21:36:41 -- target/tls.sh@37 -- # return 1 00:23:20.544 21:36:41 -- common/autotest_common.sh@643 -- # es=1 00:23:20.544 21:36:41 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:23:20.544 21:36:41 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:23:20.544 21:36:41 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:23:20.544 21:36:41 -- target/tls.sh@158 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:23:20.544 21:36:41 -- common/autotest_common.sh@640 -- # local es=0 00:23:20.544 21:36:41 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:23:20.544 21:36:41 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:23:20.544 21:36:41 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:20.544 21:36:41 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:23:20.544 21:36:41 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:20.544 21:36:41 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:23:20.544 21:36:41 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:20.544 21:36:41 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:20.544 21:36:41 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:23:20.544 21:36:41 -- target/tls.sh@23 -- # psk='--psk 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:23:20.544 21:36:41 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:20.544 21:36:41 -- target/tls.sh@28 -- # bdevperf_pid=77115 00:23:20.544 21:36:41 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:20.544 21:36:41 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:20.544 21:36:41 -- target/tls.sh@31 -- # waitforlisten 77115 /var/tmp/bdevperf.sock 00:23:20.544 21:36:41 -- common/autotest_common.sh@819 -- # '[' -z 77115 ']' 00:23:20.544 21:36:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:20.544 21:36:41 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:20.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:20.544 21:36:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:20.544 21:36:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:20.544 21:36:41 -- common/autotest_common.sh@10 -- # set +x 00:23:20.802 [2024-07-11 21:36:41.505808] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:23:20.802 [2024-07-11 21:36:41.505920] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77115 ] 00:23:20.802 [2024-07-11 21:36:41.639387] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:20.802 [2024-07-11 21:36:41.738811] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:21.736 21:36:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:21.736 21:36:42 -- common/autotest_common.sh@852 -- # return 0 00:23:21.736 21:36:42 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:23:21.994 [2024-07-11 21:36:42.753625] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:21.994 [2024-07-11 21:36:42.761505] tcp.c: 866:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:21.994 [2024-07-11 21:36:42.761553] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:21.994 [2024-07-11 21:36:42.761624] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:21.994 [2024-07-11 21:36:42.762440] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb654f0 (107): Transport endpoint is not connected 00:23:21.994 [2024-07-11 21:36:42.763424] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb654f0 (9): Bad file descriptor 00:23:21.994 [2024-07-11 21:36:42.764426] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:21.994 [2024-07-11 21:36:42.764465] nvme.c: 
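The NOT run_bdevperf cases in this stretch all reuse the bdev_nvme_attach_controller call that succeeded for the first TLSTEST run; each negative case changes exactly one parameter (the PSK file, the host NQN, the subsystem NQN, or, in the last case, no PSK at all), so the TLS handshake or the fabrics CONNECT is expected to fail with the JSON-RPC error shown. For reference, the successful form of the attach, as issued against the bdevperf RPC socket earlier in this log:

    # the attach the positive TLS case used; the failing cases above and below
    # swap key1.txt -> key2.txt, host1 -> host2, cnode1 -> cnode2, or drop --psk
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt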
708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:21.994 [2024-07-11 21:36:42.764477] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:21.994 request: 00:23:21.994 { 00:23:21.994 "name": "TLSTEST", 00:23:21.994 "trtype": "tcp", 00:23:21.994 "traddr": "10.0.0.2", 00:23:21.994 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:21.994 "adrfam": "ipv4", 00:23:21.994 "trsvcid": "4420", 00:23:21.994 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:21.994 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt", 00:23:21.994 "method": "bdev_nvme_attach_controller", 00:23:21.994 "req_id": 1 00:23:21.994 } 00:23:21.994 Got JSON-RPC error response 00:23:21.994 response: 00:23:21.994 { 00:23:21.994 "code": -32602, 00:23:21.994 "message": "Invalid parameters" 00:23:21.994 } 00:23:21.994 21:36:42 -- target/tls.sh@36 -- # killprocess 77115 00:23:21.994 21:36:42 -- common/autotest_common.sh@926 -- # '[' -z 77115 ']' 00:23:21.994 21:36:42 -- common/autotest_common.sh@930 -- # kill -0 77115 00:23:21.994 21:36:42 -- common/autotest_common.sh@931 -- # uname 00:23:21.994 21:36:42 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:21.994 21:36:42 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 77115 00:23:21.994 killing process with pid 77115 00:23:21.994 Received shutdown signal, test time was about 10.000000 seconds 00:23:21.994 00:23:21.994 Latency(us) 00:23:21.994 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:21.994 =================================================================================================================== 00:23:21.994 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:21.994 21:36:42 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:23:21.994 21:36:42 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:23:21.994 21:36:42 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 77115' 00:23:21.994 21:36:42 -- common/autotest_common.sh@945 -- # kill 77115 00:23:21.994 21:36:42 -- common/autotest_common.sh@950 -- # wait 77115 00:23:22.259 21:36:43 -- target/tls.sh@37 -- # return 1 00:23:22.259 21:36:43 -- common/autotest_common.sh@643 -- # es=1 00:23:22.259 21:36:43 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:23:22.259 21:36:43 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:23:22.259 21:36:43 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:23:22.259 21:36:43 -- target/tls.sh@161 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:23:22.259 21:36:43 -- common/autotest_common.sh@640 -- # local es=0 00:23:22.259 21:36:43 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:23:22.259 21:36:43 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:23:22.259 21:36:43 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:22.259 21:36:43 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:23:22.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:23:22.259 21:36:43 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:22.259 21:36:43 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:23:22.259 21:36:43 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:22.259 21:36:43 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:23:22.259 21:36:43 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:22.259 21:36:43 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:23:22.259 21:36:43 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:22.259 21:36:43 -- target/tls.sh@28 -- # bdevperf_pid=77141 00:23:22.259 21:36:43 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:22.259 21:36:43 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:22.259 21:36:43 -- target/tls.sh@31 -- # waitforlisten 77141 /var/tmp/bdevperf.sock 00:23:22.259 21:36:43 -- common/autotest_common.sh@819 -- # '[' -z 77141 ']' 00:23:22.259 21:36:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:22.259 21:36:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:22.259 21:36:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:22.259 21:36:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:22.259 21:36:43 -- common/autotest_common.sh@10 -- # set +x 00:23:22.259 [2024-07-11 21:36:43.059331] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
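NOT itself is a small assertion helper from the repo's autotest_common.sh (its trace lines are the common/autotest_common.sh@640-667 entries around each negative case); its exact definition is not visible in this log, but its role here is just to invert the exit status of the wrapped command so that a failing attach counts as a passing test case. A purely illustrative stand-in, not SPDK's actual implementation, could look like:

    # illustrative only - not the SPDK helper; it shows the intent of 'NOT cmd ...'
    NOT() {
        if "$@"; then
            return 1   # the command unexpectedly succeeded
        else
            return 0   # the command failed, which is what the negative test expects
        fi
    }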
00:23:22.259 [2024-07-11 21:36:43.059808] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77141 ] 00:23:22.259 [2024-07-11 21:36:43.195906] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:22.517 [2024-07-11 21:36:43.287086] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:23.450 21:36:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:23.450 21:36:44 -- common/autotest_common.sh@852 -- # return 0 00:23:23.450 21:36:44 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:23:23.712 [2024-07-11 21:36:44.430777] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:23.712 [2024-07-11 21:36:44.441725] tcp.c: 866:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:23.712 [2024-07-11 21:36:44.441775] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:23.712 [2024-07-11 21:36:44.441836] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:23.712 [2024-07-11 21:36:44.442533] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe8b4f0 (107): Transport endpoint is not connected 00:23:23.712 [2024-07-11 21:36:44.443522] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe8b4f0 (9): Bad file descriptor 00:23:23.712 [2024-07-11 21:36:44.444519] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:23:23.712 [2024-07-11 21:36:44.444540] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:23.712 [2024-07-11 21:36:44.444551] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:23:23.712 request: 00:23:23.712 { 00:23:23.712 "name": "TLSTEST", 00:23:23.712 "trtype": "tcp", 00:23:23.712 "traddr": "10.0.0.2", 00:23:23.712 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:23.712 "adrfam": "ipv4", 00:23:23.712 "trsvcid": "4420", 00:23:23.712 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:23.712 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt", 00:23:23.712 "method": "bdev_nvme_attach_controller", 00:23:23.712 "req_id": 1 00:23:23.712 } 00:23:23.712 Got JSON-RPC error response 00:23:23.712 response: 00:23:23.712 { 00:23:23.712 "code": -32602, 00:23:23.712 "message": "Invalid parameters" 00:23:23.712 } 00:23:23.712 21:36:44 -- target/tls.sh@36 -- # killprocess 77141 00:23:23.712 21:36:44 -- common/autotest_common.sh@926 -- # '[' -z 77141 ']' 00:23:23.712 21:36:44 -- common/autotest_common.sh@930 -- # kill -0 77141 00:23:23.712 21:36:44 -- common/autotest_common.sh@931 -- # uname 00:23:23.712 21:36:44 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:23.712 21:36:44 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 77141 00:23:23.712 killing process with pid 77141 00:23:23.712 Received shutdown signal, test time was about 10.000000 seconds 00:23:23.712 00:23:23.712 Latency(us) 00:23:23.712 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:23.712 =================================================================================================================== 00:23:23.712 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:23.712 21:36:44 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:23:23.712 21:36:44 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:23:23.712 21:36:44 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 77141' 00:23:23.712 21:36:44 -- common/autotest_common.sh@945 -- # kill 77141 00:23:23.712 21:36:44 -- common/autotest_common.sh@950 -- # wait 77141 00:23:23.975 21:36:44 -- target/tls.sh@37 -- # return 1 00:23:23.975 21:36:44 -- common/autotest_common.sh@643 -- # es=1 00:23:23.975 21:36:44 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:23:23.975 21:36:44 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:23:23.975 21:36:44 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:23:23.975 21:36:44 -- target/tls.sh@164 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:23.975 21:36:44 -- common/autotest_common.sh@640 -- # local es=0 00:23:23.975 21:36:44 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:23.975 21:36:44 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:23:23.975 21:36:44 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:23.975 21:36:44 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:23:23.975 21:36:44 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:23.975 21:36:44 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:23.975 21:36:44 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:23.975 21:36:44 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:23.975 21:36:44 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:23.975 21:36:44 -- target/tls.sh@23 -- # psk= 00:23:23.975 21:36:44 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:23.975 21:36:44 -- target/tls.sh@28 -- # bdevperf_pid=77170 00:23:23.975 
21:36:44 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:23.975 21:36:44 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:23.975 21:36:44 -- target/tls.sh@31 -- # waitforlisten 77170 /var/tmp/bdevperf.sock 00:23:23.975 21:36:44 -- common/autotest_common.sh@819 -- # '[' -z 77170 ']' 00:23:23.975 21:36:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:23.975 21:36:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:23.975 21:36:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:23.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:23.975 21:36:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:23.975 21:36:44 -- common/autotest_common.sh@10 -- # set +x 00:23:23.975 [2024-07-11 21:36:44.747680] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:23:23.975 [2024-07-11 21:36:44.747790] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77170 ] 00:23:23.975 [2024-07-11 21:36:44.886862] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:24.233 [2024-07-11 21:36:44.980186] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:24.800 21:36:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:24.800 21:36:45 -- common/autotest_common.sh@852 -- # return 0 00:23:24.800 21:36:45 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:25.058 [2024-07-11 21:36:45.941969] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:25.058 [2024-07-11 21:36:45.943460] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aee3a0 (9): Bad file descriptor 00:23:25.058 [2024-07-11 21:36:45.944454] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:25.058 [2024-07-11 21:36:45.944478] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:25.058 [2024-07-11 21:36:45.944498] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:23:25.058 request: 00:23:25.058 { 00:23:25.058 "name": "TLSTEST", 00:23:25.058 "trtype": "tcp", 00:23:25.058 "traddr": "10.0.0.2", 00:23:25.058 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:25.058 "adrfam": "ipv4", 00:23:25.058 "trsvcid": "4420", 00:23:25.058 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:25.058 "method": "bdev_nvme_attach_controller", 00:23:25.058 "req_id": 1 00:23:25.058 } 00:23:25.058 Got JSON-RPC error response 00:23:25.058 response: 00:23:25.058 { 00:23:25.058 "code": -32602, 00:23:25.058 "message": "Invalid parameters" 00:23:25.058 } 00:23:25.058 21:36:45 -- target/tls.sh@36 -- # killprocess 77170 00:23:25.058 21:36:45 -- common/autotest_common.sh@926 -- # '[' -z 77170 ']' 00:23:25.058 21:36:45 -- common/autotest_common.sh@930 -- # kill -0 77170 00:23:25.058 21:36:45 -- common/autotest_common.sh@931 -- # uname 00:23:25.058 21:36:45 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:25.058 21:36:45 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 77170 00:23:25.058 killing process with pid 77170 00:23:25.058 Received shutdown signal, test time was about 10.000000 seconds 00:23:25.058 00:23:25.058 Latency(us) 00:23:25.058 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:25.058 =================================================================================================================== 00:23:25.058 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:25.058 21:36:45 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:23:25.058 21:36:45 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:23:25.058 21:36:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 77170' 00:23:25.058 21:36:45 -- common/autotest_common.sh@945 -- # kill 77170 00:23:25.058 21:36:45 -- common/autotest_common.sh@950 -- # wait 77170 00:23:25.316 21:36:46 -- target/tls.sh@37 -- # return 1 00:23:25.316 21:36:46 -- common/autotest_common.sh@643 -- # es=1 00:23:25.316 21:36:46 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:23:25.316 21:36:46 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:23:25.316 21:36:46 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:23:25.316 21:36:46 -- target/tls.sh@167 -- # killprocess 76720 00:23:25.316 21:36:46 -- common/autotest_common.sh@926 -- # '[' -z 76720 ']' 00:23:25.316 21:36:46 -- common/autotest_common.sh@930 -- # kill -0 76720 00:23:25.316 21:36:46 -- common/autotest_common.sh@931 -- # uname 00:23:25.316 21:36:46 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:25.316 21:36:46 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 76720 00:23:25.317 killing process with pid 76720 00:23:25.317 21:36:46 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:23:25.317 21:36:46 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:23:25.317 21:36:46 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 76720' 00:23:25.317 21:36:46 -- common/autotest_common.sh@945 -- # kill 76720 00:23:25.317 21:36:46 -- common/autotest_common.sh@950 -- # wait 76720 00:23:25.575 21:36:46 -- target/tls.sh@168 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 02 00:23:25.575 21:36:46 -- target/tls.sh@49 -- # local key hash crc 00:23:25.575 21:36:46 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:23:25.575 21:36:46 -- target/tls.sh@51 -- # hash=02 00:23:25.575 21:36:46 -- target/tls.sh@52 -- # echo -n 
00112233445566778899aabbccddeeff0011223344556677 00:23:25.575 21:36:46 -- target/tls.sh@52 -- # gzip -1 -c 00:23:25.575 21:36:46 -- target/tls.sh@52 -- # tail -c8 00:23:25.575 21:36:46 -- target/tls.sh@52 -- # head -c 4 00:23:25.575 21:36:46 -- target/tls.sh@52 -- # crc='�e�'\''' 00:23:25.575 21:36:46 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeff0011223344556677�e�'\''' 00:23:25.575 21:36:46 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:23:25.575 21:36:46 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:25.575 21:36:46 -- target/tls.sh@168 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:25.575 21:36:46 -- target/tls.sh@169 -- # key_long_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:23:25.575 21:36:46 -- target/tls.sh@170 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:25.575 21:36:46 -- target/tls.sh@171 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:23:25.575 21:36:46 -- target/tls.sh@172 -- # nvmfappstart -m 0x2 00:23:25.575 21:36:46 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:23:25.575 21:36:46 -- common/autotest_common.sh@712 -- # xtrace_disable 00:23:25.575 21:36:46 -- common/autotest_common.sh@10 -- # set +x 00:23:25.575 21:36:46 -- nvmf/common.sh@469 -- # nvmfpid=77218 00:23:25.575 21:36:46 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:25.575 21:36:46 -- nvmf/common.sh@470 -- # waitforlisten 77218 00:23:25.575 21:36:46 -- common/autotest_common.sh@819 -- # '[' -z 77218 ']' 00:23:25.575 21:36:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:25.575 21:36:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:25.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:25.575 21:36:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:25.575 21:36:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:25.575 21:36:46 -- common/autotest_common.sh@10 -- # set +x 00:23:25.575 [2024-07-11 21:36:46.522403] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:23:25.575 [2024-07-11 21:36:46.522552] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:25.834 [2024-07-11 21:36:46.659043] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:25.834 [2024-07-11 21:36:46.751536] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:25.834 [2024-07-11 21:36:46.751679] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:25.834 [2024-07-11 21:36:46.751693] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:25.834 [2024-07-11 21:36:46.751702] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
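The format_interchange_psk trace above can be read as the following sketch (a minimal reconstruction from the commands visible in the trace; variable names are illustrative, and the hash id 02 is assumed to select the SHA-384 variant of the NVMe TLS PSK interchange format):

    key=00112233445566778899aabbccddeeff0011223344556677   # configured PSK, kept as a hex string
    hash=02                                                 # assumed: 01 = SHA-256, 02 = SHA-384
    # gzip's trailer is CRC32 then ISIZE (4 bytes each), so this keeps the raw CRC32 of the key string
    crc=$(echo -n "$key" | gzip -1 -c | tail -c8 | head -c 4)
    # interchange key = "NVMeTLSkey-1:<hash>:" + base64(key string || CRC32 bytes) + ":"
    echo "NVMeTLSkey-1:${hash}:$(echo -n "${key}${crc}" | base64):"
    # -> NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==:

The result is written to test/nvmf/target/key_long.txt and restricted to mode 0600 before the target comes up, since the PSK loader rejects looser permissions later in this run.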
00:23:25.834 [2024-07-11 21:36:46.751730] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:26.765 21:36:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:26.765 21:36:47 -- common/autotest_common.sh@852 -- # return 0 00:23:26.765 21:36:47 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:23:26.765 21:36:47 -- common/autotest_common.sh@718 -- # xtrace_disable 00:23:26.765 21:36:47 -- common/autotest_common.sh@10 -- # set +x 00:23:26.765 21:36:47 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:26.765 21:36:47 -- target/tls.sh@174 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:23:26.765 21:36:47 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:23:26.765 21:36:47 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:27.022 [2024-07-11 21:36:47.772715] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:27.022 21:36:47 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:27.279 21:36:48 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:27.279 [2024-07-11 21:36:48.216836] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:27.279 [2024-07-11 21:36:48.217083] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:27.569 21:36:48 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:27.569 malloc0 00:23:27.569 21:36:48 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:27.826 21:36:48 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:23:28.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
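For reference, the setup_nvmf_tgt helper traced above reduces to this RPC sequence (values exactly as used in this run; rpc.py stands for /home/vagrant/spdk_repo/spdk/scripts/rpc.py):

    rpc.py nvmf_create_transport -t tcp -o
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k requests a TLS (secure channel) listener
    rpc.py bdev_malloc_create 32 4096 -b malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt

Only two pieces differ from a plain TCP setup: the -k flag on the listener and the --psk path handed to nvmf_subsystem_add_host.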
00:23:28.083 21:36:48 -- target/tls.sh@176 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:23:28.084 21:36:48 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:28.084 21:36:48 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:28.084 21:36:48 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:28.084 21:36:48 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt' 00:23:28.084 21:36:48 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:28.084 21:36:49 -- target/tls.sh@28 -- # bdevperf_pid=77267 00:23:28.084 21:36:49 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:28.084 21:36:49 -- target/tls.sh@31 -- # waitforlisten 77267 /var/tmp/bdevperf.sock 00:23:28.084 21:36:49 -- common/autotest_common.sh@819 -- # '[' -z 77267 ']' 00:23:28.084 21:36:49 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:28.084 21:36:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:28.084 21:36:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:28.084 21:36:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:28.084 21:36:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:28.084 21:36:49 -- common/autotest_common.sh@10 -- # set +x 00:23:28.342 [2024-07-11 21:36:49.045112] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:23:28.342 [2024-07-11 21:36:49.045209] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77267 ] 00:23:28.342 [2024-07-11 21:36:49.183443] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:28.342 [2024-07-11 21:36:49.283576] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:29.272 21:36:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:29.272 21:36:49 -- common/autotest_common.sh@852 -- # return 0 00:23:29.272 21:36:49 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:23:29.272 [2024-07-11 21:36:50.217219] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:29.530 TLSTESTn1 00:23:29.530 21:36:50 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:29.530 Running I/O for 10 seconds... 
00:23:39.582 00:23:39.582 Latency(us) 00:23:39.582 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:39.582 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:39.582 Verification LBA range: start 0x0 length 0x2000 00:23:39.582 TLSTESTn1 : 10.02 5306.92 20.73 0.00 0.00 24078.02 4557.73 22639.71 00:23:39.582 =================================================================================================================== 00:23:39.582 Total : 5306.92 20.73 0.00 0.00 24078.02 4557.73 22639.71 00:23:39.582 0 00:23:39.582 21:37:00 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:39.582 21:37:00 -- target/tls.sh@45 -- # killprocess 77267 00:23:39.582 21:37:00 -- common/autotest_common.sh@926 -- # '[' -z 77267 ']' 00:23:39.582 21:37:00 -- common/autotest_common.sh@930 -- # kill -0 77267 00:23:39.582 21:37:00 -- common/autotest_common.sh@931 -- # uname 00:23:39.582 21:37:00 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:39.582 21:37:00 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 77267 00:23:39.582 killing process with pid 77267 00:23:39.582 Received shutdown signal, test time was about 10.000000 seconds 00:23:39.582 00:23:39.582 Latency(us) 00:23:39.582 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:39.582 =================================================================================================================== 00:23:39.582 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:39.582 21:37:00 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:23:39.582 21:37:00 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:23:39.582 21:37:00 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 77267' 00:23:39.582 21:37:00 -- common/autotest_common.sh@945 -- # kill 77267 00:23:39.582 21:37:00 -- common/autotest_common.sh@950 -- # wait 77267 00:23:39.840 21:37:00 -- target/tls.sh@179 -- # chmod 0666 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:23:39.840 21:37:00 -- target/tls.sh@180 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:23:39.840 21:37:00 -- common/autotest_common.sh@640 -- # local es=0 00:23:39.840 21:37:00 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:23:39.840 21:37:00 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:23:39.840 21:37:00 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:39.840 21:37:00 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:23:39.840 21:37:00 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:39.840 21:37:00 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:23:39.840 21:37:00 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:39.840 21:37:00 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:39.840 21:37:00 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:39.840 21:37:00 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt' 00:23:39.840 21:37:00 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:39.840 21:37:00 -- target/tls.sh@28 -- # bdevperf_pid=77406 
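The passing run above follows the usual bdevperf-as-initiator pattern: start bdevperf idle with -z on its own RPC socket, attach the TLS controller through that socket, then trigger the workload. Condensed from the trace (the harness backgrounds bdevperf and polls the socket via waitforlisten):

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests

The rerun that starts here repeats the same steps after chmod 0666 on the key file, so the attach is expected to fail.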
00:23:39.841 21:37:00 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:39.841 21:37:00 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:39.841 21:37:00 -- target/tls.sh@31 -- # waitforlisten 77406 /var/tmp/bdevperf.sock 00:23:39.841 21:37:00 -- common/autotest_common.sh@819 -- # '[' -z 77406 ']' 00:23:39.841 21:37:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:39.841 21:37:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:39.841 21:37:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:39.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:39.841 21:37:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:39.841 21:37:00 -- common/autotest_common.sh@10 -- # set +x 00:23:39.841 [2024-07-11 21:37:00.753653] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:23:39.841 [2024-07-11 21:37:00.754606] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77406 ] 00:23:40.099 [2024-07-11 21:37:00.908058] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:40.099 [2024-07-11 21:37:01.000460] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:41.053 21:37:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:41.053 21:37:01 -- common/autotest_common.sh@852 -- # return 0 00:23:41.053 21:37:01 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:23:41.053 [2024-07-11 21:37:01.903716] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:41.053 [2024-07-11 21:37:01.903791] bdev_nvme_rpc.c: 336:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:23:41.053 request: 00:23:41.053 { 00:23:41.053 "name": "TLSTEST", 00:23:41.053 "trtype": "tcp", 00:23:41.053 "traddr": "10.0.0.2", 00:23:41.053 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:41.053 "adrfam": "ipv4", 00:23:41.053 "trsvcid": "4420", 00:23:41.053 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:41.053 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:23:41.053 "method": "bdev_nvme_attach_controller", 00:23:41.053 "req_id": 1 00:23:41.053 } 00:23:41.053 Got JSON-RPC error response 00:23:41.053 response: 00:23:41.053 { 00:23:41.053 "code": -22, 00:23:41.053 "message": "Could not retrieve PSK from file: /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:23:41.053 } 00:23:41.053 21:37:01 -- target/tls.sh@36 -- # killprocess 77406 00:23:41.053 21:37:01 -- common/autotest_common.sh@926 -- # '[' -z 77406 ']' 00:23:41.053 21:37:01 -- common/autotest_common.sh@930 -- # kill -0 77406 00:23:41.053 21:37:01 -- common/autotest_common.sh@931 -- # uname 00:23:41.053 21:37:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:41.053 21:37:01 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 77406 00:23:41.053 killing process 
with pid 77406 00:23:41.053 Received shutdown signal, test time was about 10.000000 seconds 00:23:41.053 00:23:41.053 Latency(us) 00:23:41.053 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:41.053 =================================================================================================================== 00:23:41.053 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:41.053 21:37:01 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:23:41.053 21:37:01 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:23:41.053 21:37:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 77406' 00:23:41.053 21:37:01 -- common/autotest_common.sh@945 -- # kill 77406 00:23:41.053 21:37:01 -- common/autotest_common.sh@950 -- # wait 77406 00:23:41.311 21:37:02 -- target/tls.sh@37 -- # return 1 00:23:41.311 21:37:02 -- common/autotest_common.sh@643 -- # es=1 00:23:41.311 21:37:02 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:23:41.311 21:37:02 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:23:41.311 21:37:02 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:23:41.311 21:37:02 -- target/tls.sh@183 -- # killprocess 77218 00:23:41.311 21:37:02 -- common/autotest_common.sh@926 -- # '[' -z 77218 ']' 00:23:41.311 21:37:02 -- common/autotest_common.sh@930 -- # kill -0 77218 00:23:41.311 21:37:02 -- common/autotest_common.sh@931 -- # uname 00:23:41.311 21:37:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:41.311 21:37:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 77218 00:23:41.311 killing process with pid 77218 00:23:41.311 21:37:02 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:23:41.311 21:37:02 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:23:41.311 21:37:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 77218' 00:23:41.311 21:37:02 -- common/autotest_common.sh@945 -- # kill 77218 00:23:41.311 21:37:02 -- common/autotest_common.sh@950 -- # wait 77218 00:23:41.570 21:37:02 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:23:41.570 21:37:02 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:23:41.570 21:37:02 -- common/autotest_common.sh@712 -- # xtrace_disable 00:23:41.570 21:37:02 -- common/autotest_common.sh@10 -- # set +x 00:23:41.570 21:37:02 -- nvmf/common.sh@469 -- # nvmfpid=77434 00:23:41.570 21:37:02 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:41.570 21:37:02 -- nvmf/common.sh@470 -- # waitforlisten 77434 00:23:41.570 21:37:02 -- common/autotest_common.sh@819 -- # '[' -z 77434 ']' 00:23:41.570 21:37:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:41.570 21:37:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:41.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:41.570 21:37:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:41.570 21:37:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:41.570 21:37:02 -- common/autotest_common.sh@10 -- # set +x 00:23:41.570 [2024-07-11 21:37:02.480745] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:23:41.570 [2024-07-11 21:37:02.480860] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:41.827 [2024-07-11 21:37:02.619008] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:41.827 [2024-07-11 21:37:02.710702] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:41.827 [2024-07-11 21:37:02.710861] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:41.827 [2024-07-11 21:37:02.710875] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:41.827 [2024-07-11 21:37:02.710884] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:41.827 [2024-07-11 21:37:02.710911] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:42.758 21:37:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:42.758 21:37:03 -- common/autotest_common.sh@852 -- # return 0 00:23:42.758 21:37:03 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:23:42.758 21:37:03 -- common/autotest_common.sh@718 -- # xtrace_disable 00:23:42.759 21:37:03 -- common/autotest_common.sh@10 -- # set +x 00:23:42.759 21:37:03 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:42.759 21:37:03 -- target/tls.sh@186 -- # NOT setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:23:42.759 21:37:03 -- common/autotest_common.sh@640 -- # local es=0 00:23:42.759 21:37:03 -- common/autotest_common.sh@642 -- # valid_exec_arg setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:23:42.759 21:37:03 -- common/autotest_common.sh@628 -- # local arg=setup_nvmf_tgt 00:23:42.759 21:37:03 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:42.759 21:37:03 -- common/autotest_common.sh@632 -- # type -t setup_nvmf_tgt 00:23:42.759 21:37:03 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:42.759 21:37:03 -- common/autotest_common.sh@643 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:23:42.759 21:37:03 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:23:42.759 21:37:03 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:43.017 [2024-07-11 21:37:03.723606] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:43.017 21:37:03 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:43.276 21:37:03 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:43.276 [2024-07-11 21:37:04.219757] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:43.276 [2024-07-11 21:37:04.220026] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:43.535 21:37:04 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:43.535 malloc0 00:23:43.535 21:37:04 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:43.793 21:37:04 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:23:44.051 [2024-07-11 21:37:04.999279] tcp.c:3549:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:23:44.051 [2024-07-11 21:37:04.999341] tcp.c:3618:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:23:44.051 [2024-07-11 21:37:04.999362] subsystem.c: 880:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:23:44.311 request: 00:23:44.311 { 00:23:44.311 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:44.311 "host": "nqn.2016-06.io.spdk:host1", 00:23:44.311 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:23:44.311 "method": "nvmf_subsystem_add_host", 00:23:44.311 "req_id": 1 00:23:44.311 } 00:23:44.311 Got JSON-RPC error response 00:23:44.311 response: 00:23:44.311 { 00:23:44.311 "code": -32603, 00:23:44.311 "message": "Internal error" 00:23:44.311 } 00:23:44.311 21:37:05 -- common/autotest_common.sh@643 -- # es=1 00:23:44.311 21:37:05 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:23:44.311 21:37:05 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:23:44.311 21:37:05 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:23:44.311 21:37:05 -- target/tls.sh@189 -- # killprocess 77434 00:23:44.311 21:37:05 -- common/autotest_common.sh@926 -- # '[' -z 77434 ']' 00:23:44.311 21:37:05 -- common/autotest_common.sh@930 -- # kill -0 77434 00:23:44.311 21:37:05 -- common/autotest_common.sh@931 -- # uname 00:23:44.311 21:37:05 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:44.311 21:37:05 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 77434 00:23:44.311 21:37:05 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:23:44.311 21:37:05 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:23:44.311 21:37:05 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 77434' 00:23:44.311 killing process with pid 77434 00:23:44.311 21:37:05 -- common/autotest_common.sh@945 -- # kill 77434 00:23:44.311 21:37:05 -- common/autotest_common.sh@950 -- # wait 77434 00:23:44.569 21:37:05 -- target/tls.sh@190 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:23:44.569 21:37:05 -- target/tls.sh@193 -- # nvmfappstart -m 0x2 00:23:44.569 21:37:05 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:23:44.569 21:37:05 -- common/autotest_common.sh@712 -- # xtrace_disable 00:23:44.569 21:37:05 -- common/autotest_common.sh@10 -- # set +x 00:23:44.569 21:37:05 -- nvmf/common.sh@469 -- # nvmfpid=77502 00:23:44.569 21:37:05 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:44.569 21:37:05 -- nvmf/common.sh@470 -- # waitforlisten 77502 00:23:44.569 21:37:05 -- common/autotest_common.sh@819 -- # '[' -z 77502 ']' 00:23:44.569 21:37:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:44.569 21:37:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:44.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:44.569 21:37:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
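Both failures in this pass trace back to the same check: once the key file was chmod'ed to 0666, tcp_load_psk reports "Incorrect permissions for PSK file", so bdev_nvme_attach_controller fails with -22 and nvmf_subsystem_add_host with -32603. Restoring owner-only access, as the test does before restarting the target, clears it; an illustrative check (the stat line is not part of the test script):

    chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt
    stat -c '%a %n' /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt   # expect mode 600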
00:23:44.569 21:37:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:44.569 21:37:05 -- common/autotest_common.sh@10 -- # set +x 00:23:44.569 [2024-07-11 21:37:05.334639] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:23:44.569 [2024-07-11 21:37:05.334782] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:44.569 [2024-07-11 21:37:05.480618] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:44.826 [2024-07-11 21:37:05.575399] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:44.826 [2024-07-11 21:37:05.575557] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:44.826 [2024-07-11 21:37:05.575572] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:44.826 [2024-07-11 21:37:05.575582] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:44.826 [2024-07-11 21:37:05.575614] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:45.389 21:37:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:45.389 21:37:06 -- common/autotest_common.sh@852 -- # return 0 00:23:45.389 21:37:06 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:23:45.389 21:37:06 -- common/autotest_common.sh@718 -- # xtrace_disable 00:23:45.389 21:37:06 -- common/autotest_common.sh@10 -- # set +x 00:23:45.389 21:37:06 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:45.389 21:37:06 -- target/tls.sh@194 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:23:45.389 21:37:06 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:23:45.389 21:37:06 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:45.647 [2024-07-11 21:37:06.463787] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:45.647 21:37:06 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:45.904 21:37:06 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:46.161 [2024-07-11 21:37:07.003921] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:46.161 [2024-07-11 21:37:07.004184] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:46.161 21:37:07 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:46.418 malloc0 00:23:46.419 21:37:07 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:46.676 21:37:07 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:23:46.933 21:37:07 -- target/tls.sh@197 -- # bdevperf_pid=77557 00:23:46.933 21:37:07 -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 
0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:46.933 21:37:07 -- target/tls.sh@199 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:46.933 21:37:07 -- target/tls.sh@200 -- # waitforlisten 77557 /var/tmp/bdevperf.sock 00:23:46.933 21:37:07 -- common/autotest_common.sh@819 -- # '[' -z 77557 ']' 00:23:46.933 21:37:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:46.933 21:37:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:46.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:46.933 21:37:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:46.933 21:37:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:46.933 21:37:07 -- common/autotest_common.sh@10 -- # set +x 00:23:46.933 [2024-07-11 21:37:07.792511] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:23:46.933 [2024-07-11 21:37:07.792613] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77557 ] 00:23:47.191 [2024-07-11 21:37:07.930938] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:47.191 [2024-07-11 21:37:08.028670] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:48.133 21:37:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:48.133 21:37:08 -- common/autotest_common.sh@852 -- # return 0 00:23:48.133 21:37:08 -- target/tls.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:23:48.133 [2024-07-11 21:37:08.987970] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:48.133 TLSTESTn1 00:23:48.133 21:37:09 -- target/tls.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:23:48.699 21:37:09 -- target/tls.sh@205 -- # tgtconf='{ 00:23:48.699 "subsystems": [ 00:23:48.699 { 00:23:48.699 "subsystem": "iobuf", 00:23:48.699 "config": [ 00:23:48.699 { 00:23:48.699 "method": "iobuf_set_options", 00:23:48.699 "params": { 00:23:48.699 "small_pool_count": 8192, 00:23:48.699 "large_pool_count": 1024, 00:23:48.699 "small_bufsize": 8192, 00:23:48.699 "large_bufsize": 135168 00:23:48.699 } 00:23:48.699 } 00:23:48.699 ] 00:23:48.699 }, 00:23:48.699 { 00:23:48.699 "subsystem": "sock", 00:23:48.699 "config": [ 00:23:48.699 { 00:23:48.699 "method": "sock_impl_set_options", 00:23:48.699 "params": { 00:23:48.699 "impl_name": "uring", 00:23:48.699 "recv_buf_size": 2097152, 00:23:48.699 "send_buf_size": 2097152, 00:23:48.699 "enable_recv_pipe": true, 00:23:48.699 "enable_quickack": false, 00:23:48.699 "enable_placement_id": 0, 00:23:48.699 "enable_zerocopy_send_server": false, 00:23:48.699 "enable_zerocopy_send_client": false, 00:23:48.699 "zerocopy_threshold": 0, 00:23:48.699 "tls_version": 0, 00:23:48.699 "enable_ktls": false 00:23:48.699 } 00:23:48.699 }, 00:23:48.699 { 00:23:48.699 "method": "sock_impl_set_options", 00:23:48.699 "params": { 00:23:48.699 "impl_name": "posix", 00:23:48.699 "recv_buf_size": 2097152, 00:23:48.699 "send_buf_size": 2097152, 
00:23:48.699 "enable_recv_pipe": true, 00:23:48.699 "enable_quickack": false, 00:23:48.699 "enable_placement_id": 0, 00:23:48.699 "enable_zerocopy_send_server": true, 00:23:48.699 "enable_zerocopy_send_client": false, 00:23:48.699 "zerocopy_threshold": 0, 00:23:48.699 "tls_version": 0, 00:23:48.699 "enable_ktls": false 00:23:48.699 } 00:23:48.699 }, 00:23:48.699 { 00:23:48.699 "method": "sock_impl_set_options", 00:23:48.699 "params": { 00:23:48.699 "impl_name": "ssl", 00:23:48.699 "recv_buf_size": 4096, 00:23:48.699 "send_buf_size": 4096, 00:23:48.699 "enable_recv_pipe": true, 00:23:48.699 "enable_quickack": false, 00:23:48.699 "enable_placement_id": 0, 00:23:48.699 "enable_zerocopy_send_server": true, 00:23:48.699 "enable_zerocopy_send_client": false, 00:23:48.699 "zerocopy_threshold": 0, 00:23:48.699 "tls_version": 0, 00:23:48.699 "enable_ktls": false 00:23:48.699 } 00:23:48.699 } 00:23:48.699 ] 00:23:48.699 }, 00:23:48.699 { 00:23:48.699 "subsystem": "vmd", 00:23:48.699 "config": [] 00:23:48.699 }, 00:23:48.699 { 00:23:48.699 "subsystem": "accel", 00:23:48.699 "config": [ 00:23:48.699 { 00:23:48.699 "method": "accel_set_options", 00:23:48.699 "params": { 00:23:48.699 "small_cache_size": 128, 00:23:48.699 "large_cache_size": 16, 00:23:48.699 "task_count": 2048, 00:23:48.699 "sequence_count": 2048, 00:23:48.699 "buf_count": 2048 00:23:48.699 } 00:23:48.699 } 00:23:48.699 ] 00:23:48.699 }, 00:23:48.699 { 00:23:48.699 "subsystem": "bdev", 00:23:48.699 "config": [ 00:23:48.699 { 00:23:48.699 "method": "bdev_set_options", 00:23:48.699 "params": { 00:23:48.699 "bdev_io_pool_size": 65535, 00:23:48.699 "bdev_io_cache_size": 256, 00:23:48.699 "bdev_auto_examine": true, 00:23:48.699 "iobuf_small_cache_size": 128, 00:23:48.699 "iobuf_large_cache_size": 16 00:23:48.699 } 00:23:48.699 }, 00:23:48.699 { 00:23:48.699 "method": "bdev_raid_set_options", 00:23:48.699 "params": { 00:23:48.699 "process_window_size_kb": 1024 00:23:48.699 } 00:23:48.699 }, 00:23:48.699 { 00:23:48.699 "method": "bdev_iscsi_set_options", 00:23:48.699 "params": { 00:23:48.699 "timeout_sec": 30 00:23:48.699 } 00:23:48.699 }, 00:23:48.699 { 00:23:48.699 "method": "bdev_nvme_set_options", 00:23:48.699 "params": { 00:23:48.699 "action_on_timeout": "none", 00:23:48.699 "timeout_us": 0, 00:23:48.699 "timeout_admin_us": 0, 00:23:48.699 "keep_alive_timeout_ms": 10000, 00:23:48.699 "transport_retry_count": 4, 00:23:48.699 "arbitration_burst": 0, 00:23:48.699 "low_priority_weight": 0, 00:23:48.699 "medium_priority_weight": 0, 00:23:48.699 "high_priority_weight": 0, 00:23:48.699 "nvme_adminq_poll_period_us": 10000, 00:23:48.699 "nvme_ioq_poll_period_us": 0, 00:23:48.699 "io_queue_requests": 0, 00:23:48.699 "delay_cmd_submit": true, 00:23:48.699 "bdev_retry_count": 3, 00:23:48.699 "transport_ack_timeout": 0, 00:23:48.699 "ctrlr_loss_timeout_sec": 0, 00:23:48.699 "reconnect_delay_sec": 0, 00:23:48.699 "fast_io_fail_timeout_sec": 0, 00:23:48.699 "generate_uuids": false, 00:23:48.699 "transport_tos": 0, 00:23:48.699 "io_path_stat": false, 00:23:48.699 "allow_accel_sequence": false 00:23:48.699 } 00:23:48.699 }, 00:23:48.699 { 00:23:48.699 "method": "bdev_nvme_set_hotplug", 00:23:48.699 "params": { 00:23:48.699 "period_us": 100000, 00:23:48.699 "enable": false 00:23:48.699 } 00:23:48.699 }, 00:23:48.699 { 00:23:48.699 "method": "bdev_malloc_create", 00:23:48.699 "params": { 00:23:48.699 "name": "malloc0", 00:23:48.699 "num_blocks": 8192, 00:23:48.699 "block_size": 4096, 00:23:48.699 "physical_block_size": 4096, 00:23:48.699 "uuid": 
"c3493c0a-0db5-4293-b966-76666d13ae58", 00:23:48.699 "optimal_io_boundary": 0 00:23:48.699 } 00:23:48.699 }, 00:23:48.699 { 00:23:48.699 "method": "bdev_wait_for_examine" 00:23:48.699 } 00:23:48.699 ] 00:23:48.699 }, 00:23:48.699 { 00:23:48.699 "subsystem": "nbd", 00:23:48.699 "config": [] 00:23:48.699 }, 00:23:48.699 { 00:23:48.699 "subsystem": "scheduler", 00:23:48.699 "config": [ 00:23:48.699 { 00:23:48.699 "method": "framework_set_scheduler", 00:23:48.699 "params": { 00:23:48.699 "name": "static" 00:23:48.699 } 00:23:48.699 } 00:23:48.699 ] 00:23:48.699 }, 00:23:48.699 { 00:23:48.699 "subsystem": "nvmf", 00:23:48.699 "config": [ 00:23:48.699 { 00:23:48.699 "method": "nvmf_set_config", 00:23:48.699 "params": { 00:23:48.699 "discovery_filter": "match_any", 00:23:48.699 "admin_cmd_passthru": { 00:23:48.699 "identify_ctrlr": false 00:23:48.699 } 00:23:48.699 } 00:23:48.699 }, 00:23:48.699 { 00:23:48.699 "method": "nvmf_set_max_subsystems", 00:23:48.699 "params": { 00:23:48.699 "max_subsystems": 1024 00:23:48.699 } 00:23:48.699 }, 00:23:48.699 { 00:23:48.699 "method": "nvmf_set_crdt", 00:23:48.699 "params": { 00:23:48.699 "crdt1": 0, 00:23:48.699 "crdt2": 0, 00:23:48.699 "crdt3": 0 00:23:48.699 } 00:23:48.699 }, 00:23:48.699 { 00:23:48.699 "method": "nvmf_create_transport", 00:23:48.699 "params": { 00:23:48.699 "trtype": "TCP", 00:23:48.699 "max_queue_depth": 128, 00:23:48.699 "max_io_qpairs_per_ctrlr": 127, 00:23:48.699 "in_capsule_data_size": 4096, 00:23:48.699 "max_io_size": 131072, 00:23:48.699 "io_unit_size": 131072, 00:23:48.699 "max_aq_depth": 128, 00:23:48.699 "num_shared_buffers": 511, 00:23:48.699 "buf_cache_size": 4294967295, 00:23:48.699 "dif_insert_or_strip": false, 00:23:48.699 "zcopy": false, 00:23:48.699 "c2h_success": false, 00:23:48.699 "sock_priority": 0, 00:23:48.699 "abort_timeout_sec": 1 00:23:48.699 } 00:23:48.699 }, 00:23:48.699 { 00:23:48.699 "method": "nvmf_create_subsystem", 00:23:48.699 "params": { 00:23:48.699 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:48.699 "allow_any_host": false, 00:23:48.699 "serial_number": "SPDK00000000000001", 00:23:48.699 "model_number": "SPDK bdev Controller", 00:23:48.699 "max_namespaces": 10, 00:23:48.699 "min_cntlid": 1, 00:23:48.699 "max_cntlid": 65519, 00:23:48.699 "ana_reporting": false 00:23:48.699 } 00:23:48.699 }, 00:23:48.699 { 00:23:48.699 "method": "nvmf_subsystem_add_host", 00:23:48.699 "params": { 00:23:48.699 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:48.699 "host": "nqn.2016-06.io.spdk:host1", 00:23:48.699 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:23:48.699 } 00:23:48.699 }, 00:23:48.699 { 00:23:48.700 "method": "nvmf_subsystem_add_ns", 00:23:48.700 "params": { 00:23:48.700 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:48.700 "namespace": { 00:23:48.700 "nsid": 1, 00:23:48.700 "bdev_name": "malloc0", 00:23:48.700 "nguid": "C3493C0A0DB54293B96676666D13AE58", 00:23:48.700 "uuid": "c3493c0a-0db5-4293-b966-76666d13ae58" 00:23:48.700 } 00:23:48.700 } 00:23:48.700 }, 00:23:48.700 { 00:23:48.700 "method": "nvmf_subsystem_add_listener", 00:23:48.700 "params": { 00:23:48.700 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:48.700 "listen_address": { 00:23:48.700 "trtype": "TCP", 00:23:48.700 "adrfam": "IPv4", 00:23:48.700 "traddr": "10.0.0.2", 00:23:48.700 "trsvcid": "4420" 00:23:48.700 }, 00:23:48.700 "secure_channel": true 00:23:48.700 } 00:23:48.700 } 00:23:48.700 ] 00:23:48.700 } 00:23:48.700 ] 00:23:48.700 }' 00:23:48.700 21:37:09 -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock save_config 00:23:48.958 21:37:09 -- target/tls.sh@206 -- # bdevperfconf='{ 00:23:48.958 "subsystems": [ 00:23:48.958 { 00:23:48.958 "subsystem": "iobuf", 00:23:48.958 "config": [ 00:23:48.958 { 00:23:48.958 "method": "iobuf_set_options", 00:23:48.958 "params": { 00:23:48.958 "small_pool_count": 8192, 00:23:48.958 "large_pool_count": 1024, 00:23:48.958 "small_bufsize": 8192, 00:23:48.958 "large_bufsize": 135168 00:23:48.958 } 00:23:48.958 } 00:23:48.958 ] 00:23:48.958 }, 00:23:48.958 { 00:23:48.958 "subsystem": "sock", 00:23:48.958 "config": [ 00:23:48.958 { 00:23:48.958 "method": "sock_impl_set_options", 00:23:48.958 "params": { 00:23:48.958 "impl_name": "uring", 00:23:48.958 "recv_buf_size": 2097152, 00:23:48.958 "send_buf_size": 2097152, 00:23:48.958 "enable_recv_pipe": true, 00:23:48.958 "enable_quickack": false, 00:23:48.958 "enable_placement_id": 0, 00:23:48.958 "enable_zerocopy_send_server": false, 00:23:48.958 "enable_zerocopy_send_client": false, 00:23:48.958 "zerocopy_threshold": 0, 00:23:48.958 "tls_version": 0, 00:23:48.958 "enable_ktls": false 00:23:48.958 } 00:23:48.958 }, 00:23:48.958 { 00:23:48.958 "method": "sock_impl_set_options", 00:23:48.958 "params": { 00:23:48.958 "impl_name": "posix", 00:23:48.958 "recv_buf_size": 2097152, 00:23:48.958 "send_buf_size": 2097152, 00:23:48.958 "enable_recv_pipe": true, 00:23:48.958 "enable_quickack": false, 00:23:48.958 "enable_placement_id": 0, 00:23:48.958 "enable_zerocopy_send_server": true, 00:23:48.958 "enable_zerocopy_send_client": false, 00:23:48.958 "zerocopy_threshold": 0, 00:23:48.958 "tls_version": 0, 00:23:48.958 "enable_ktls": false 00:23:48.958 } 00:23:48.958 }, 00:23:48.958 { 00:23:48.958 "method": "sock_impl_set_options", 00:23:48.958 "params": { 00:23:48.958 "impl_name": "ssl", 00:23:48.958 "recv_buf_size": 4096, 00:23:48.958 "send_buf_size": 4096, 00:23:48.958 "enable_recv_pipe": true, 00:23:48.958 "enable_quickack": false, 00:23:48.958 "enable_placement_id": 0, 00:23:48.958 "enable_zerocopy_send_server": true, 00:23:48.958 "enable_zerocopy_send_client": false, 00:23:48.958 "zerocopy_threshold": 0, 00:23:48.958 "tls_version": 0, 00:23:48.958 "enable_ktls": false 00:23:48.958 } 00:23:48.958 } 00:23:48.958 ] 00:23:48.958 }, 00:23:48.958 { 00:23:48.958 "subsystem": "vmd", 00:23:48.958 "config": [] 00:23:48.958 }, 00:23:48.958 { 00:23:48.958 "subsystem": "accel", 00:23:48.958 "config": [ 00:23:48.958 { 00:23:48.958 "method": "accel_set_options", 00:23:48.958 "params": { 00:23:48.958 "small_cache_size": 128, 00:23:48.958 "large_cache_size": 16, 00:23:48.958 "task_count": 2048, 00:23:48.958 "sequence_count": 2048, 00:23:48.958 "buf_count": 2048 00:23:48.958 } 00:23:48.958 } 00:23:48.958 ] 00:23:48.958 }, 00:23:48.958 { 00:23:48.958 "subsystem": "bdev", 00:23:48.958 "config": [ 00:23:48.958 { 00:23:48.958 "method": "bdev_set_options", 00:23:48.958 "params": { 00:23:48.958 "bdev_io_pool_size": 65535, 00:23:48.958 "bdev_io_cache_size": 256, 00:23:48.958 "bdev_auto_examine": true, 00:23:48.958 "iobuf_small_cache_size": 128, 00:23:48.958 "iobuf_large_cache_size": 16 00:23:48.958 } 00:23:48.958 }, 00:23:48.958 { 00:23:48.958 "method": "bdev_raid_set_options", 00:23:48.958 "params": { 00:23:48.958 "process_window_size_kb": 1024 00:23:48.958 } 00:23:48.958 }, 00:23:48.958 { 00:23:48.958 "method": "bdev_iscsi_set_options", 00:23:48.958 "params": { 00:23:48.958 "timeout_sec": 30 00:23:48.958 } 00:23:48.958 }, 00:23:48.958 { 00:23:48.958 "method": "bdev_nvme_set_options", 00:23:48.958 "params": { 00:23:48.958 
"action_on_timeout": "none", 00:23:48.958 "timeout_us": 0, 00:23:48.958 "timeout_admin_us": 0, 00:23:48.958 "keep_alive_timeout_ms": 10000, 00:23:48.958 "transport_retry_count": 4, 00:23:48.958 "arbitration_burst": 0, 00:23:48.958 "low_priority_weight": 0, 00:23:48.958 "medium_priority_weight": 0, 00:23:48.958 "high_priority_weight": 0, 00:23:48.958 "nvme_adminq_poll_period_us": 10000, 00:23:48.958 "nvme_ioq_poll_period_us": 0, 00:23:48.959 "io_queue_requests": 512, 00:23:48.959 "delay_cmd_submit": true, 00:23:48.959 "bdev_retry_count": 3, 00:23:48.959 "transport_ack_timeout": 0, 00:23:48.959 "ctrlr_loss_timeout_sec": 0, 00:23:48.959 "reconnect_delay_sec": 0, 00:23:48.959 "fast_io_fail_timeout_sec": 0, 00:23:48.959 "generate_uuids": false, 00:23:48.959 "transport_tos": 0, 00:23:48.959 "io_path_stat": false, 00:23:48.959 "allow_accel_sequence": false 00:23:48.959 } 00:23:48.959 }, 00:23:48.959 { 00:23:48.959 "method": "bdev_nvme_attach_controller", 00:23:48.959 "params": { 00:23:48.959 "name": "TLSTEST", 00:23:48.959 "trtype": "TCP", 00:23:48.959 "adrfam": "IPv4", 00:23:48.959 "traddr": "10.0.0.2", 00:23:48.959 "trsvcid": "4420", 00:23:48.959 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:48.959 "prchk_reftag": false, 00:23:48.959 "prchk_guard": false, 00:23:48.959 "ctrlr_loss_timeout_sec": 0, 00:23:48.959 "reconnect_delay_sec": 0, 00:23:48.959 "fast_io_fail_timeout_sec": 0, 00:23:48.959 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:23:48.959 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:48.959 "hdgst": false, 00:23:48.959 "ddgst": false 00:23:48.959 } 00:23:48.959 }, 00:23:48.959 { 00:23:48.959 "method": "bdev_nvme_set_hotplug", 00:23:48.959 "params": { 00:23:48.959 "period_us": 100000, 00:23:48.959 "enable": false 00:23:48.959 } 00:23:48.959 }, 00:23:48.959 { 00:23:48.959 "method": "bdev_wait_for_examine" 00:23:48.959 } 00:23:48.959 ] 00:23:48.959 }, 00:23:48.959 { 00:23:48.959 "subsystem": "nbd", 00:23:48.959 "config": [] 00:23:48.959 } 00:23:48.959 ] 00:23:48.959 }' 00:23:48.959 21:37:09 -- target/tls.sh@208 -- # killprocess 77557 00:23:48.959 21:37:09 -- common/autotest_common.sh@926 -- # '[' -z 77557 ']' 00:23:48.959 21:37:09 -- common/autotest_common.sh@930 -- # kill -0 77557 00:23:48.959 21:37:09 -- common/autotest_common.sh@931 -- # uname 00:23:48.959 21:37:09 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:48.959 21:37:09 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 77557 00:23:48.959 21:37:09 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:23:48.959 21:37:09 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:23:48.959 killing process with pid 77557 00:23:48.959 21:37:09 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 77557' 00:23:48.959 21:37:09 -- common/autotest_common.sh@945 -- # kill 77557 00:23:48.959 Received shutdown signal, test time was about 10.000000 seconds 00:23:48.959 00:23:48.959 Latency(us) 00:23:48.959 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:48.959 =================================================================================================================== 00:23:48.959 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:48.959 21:37:09 -- common/autotest_common.sh@950 -- # wait 77557 00:23:49.216 21:37:09 -- target/tls.sh@209 -- # killprocess 77502 00:23:49.216 21:37:09 -- common/autotest_common.sh@926 -- # '[' -z 77502 ']' 00:23:49.216 21:37:09 -- common/autotest_common.sh@930 -- # kill -0 
77502 00:23:49.216 21:37:09 -- common/autotest_common.sh@931 -- # uname 00:23:49.216 21:37:09 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:49.216 21:37:09 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 77502 00:23:49.216 21:37:10 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:23:49.216 21:37:10 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:23:49.216 killing process with pid 77502 00:23:49.216 21:37:10 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 77502' 00:23:49.216 21:37:10 -- common/autotest_common.sh@945 -- # kill 77502 00:23:49.216 21:37:10 -- common/autotest_common.sh@950 -- # wait 77502 00:23:49.473 21:37:10 -- target/tls.sh@212 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:23:49.473 21:37:10 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:23:49.474 21:37:10 -- target/tls.sh@212 -- # echo '{ 00:23:49.474 "subsystems": [ 00:23:49.474 { 00:23:49.474 "subsystem": "iobuf", 00:23:49.474 "config": [ 00:23:49.474 { 00:23:49.474 "method": "iobuf_set_options", 00:23:49.474 "params": { 00:23:49.474 "small_pool_count": 8192, 00:23:49.474 "large_pool_count": 1024, 00:23:49.474 "small_bufsize": 8192, 00:23:49.474 "large_bufsize": 135168 00:23:49.474 } 00:23:49.474 } 00:23:49.474 ] 00:23:49.474 }, 00:23:49.474 { 00:23:49.474 "subsystem": "sock", 00:23:49.474 "config": [ 00:23:49.474 { 00:23:49.474 "method": "sock_impl_set_options", 00:23:49.474 "params": { 00:23:49.474 "impl_name": "uring", 00:23:49.474 "recv_buf_size": 2097152, 00:23:49.474 "send_buf_size": 2097152, 00:23:49.474 "enable_recv_pipe": true, 00:23:49.474 "enable_quickack": false, 00:23:49.474 "enable_placement_id": 0, 00:23:49.474 "enable_zerocopy_send_server": false, 00:23:49.474 "enable_zerocopy_send_client": false, 00:23:49.474 "zerocopy_threshold": 0, 00:23:49.474 "tls_version": 0, 00:23:49.474 "enable_ktls": false 00:23:49.474 } 00:23:49.474 }, 00:23:49.474 { 00:23:49.474 "method": "sock_impl_set_options", 00:23:49.474 "params": { 00:23:49.474 "impl_name": "posix", 00:23:49.474 "recv_buf_size": 2097152, 00:23:49.474 "send_buf_size": 2097152, 00:23:49.474 "enable_recv_pipe": true, 00:23:49.474 "enable_quickack": false, 00:23:49.474 "enable_placement_id": 0, 00:23:49.474 "enable_zerocopy_send_server": true, 00:23:49.474 "enable_zerocopy_send_client": false, 00:23:49.474 "zerocopy_threshold": 0, 00:23:49.474 "tls_version": 0, 00:23:49.474 "enable_ktls": false 00:23:49.474 } 00:23:49.474 }, 00:23:49.474 { 00:23:49.474 "method": "sock_impl_set_options", 00:23:49.474 "params": { 00:23:49.474 "impl_name": "ssl", 00:23:49.474 "recv_buf_size": 4096, 00:23:49.474 "send_buf_size": 4096, 00:23:49.474 "enable_recv_pipe": true, 00:23:49.474 "enable_quickack": false, 00:23:49.474 "enable_placement_id": 0, 00:23:49.474 "enable_zerocopy_send_server": true, 00:23:49.474 "enable_zerocopy_send_client": false, 00:23:49.474 "zerocopy_threshold": 0, 00:23:49.474 "tls_version": 0, 00:23:49.474 "enable_ktls": false 00:23:49.474 } 00:23:49.474 } 00:23:49.474 ] 00:23:49.474 }, 00:23:49.474 { 00:23:49.474 "subsystem": "vmd", 00:23:49.474 "config": [] 00:23:49.474 }, 00:23:49.474 { 00:23:49.474 "subsystem": "accel", 00:23:49.474 "config": [ 00:23:49.474 { 00:23:49.474 "method": "accel_set_options", 00:23:49.474 "params": { 00:23:49.474 "small_cache_size": 128, 00:23:49.474 "large_cache_size": 16, 00:23:49.474 "task_count": 2048, 00:23:49.474 "sequence_count": 2048, 00:23:49.474 "buf_count": 2048 00:23:49.474 } 00:23:49.474 } 00:23:49.474 ] 00:23:49.474 }, 
00:23:49.474 { 00:23:49.474 "subsystem": "bdev", 00:23:49.474 "config": [ 00:23:49.474 { 00:23:49.474 "method": "bdev_set_options", 00:23:49.474 "params": { 00:23:49.474 "bdev_io_pool_size": 65535, 00:23:49.474 "bdev_io_cache_size": 256, 00:23:49.474 "bdev_auto_examine": true, 00:23:49.474 "iobuf_small_cache_size": 128, 00:23:49.474 "iobuf_large_cache_size": 16 00:23:49.474 } 00:23:49.474 }, 00:23:49.474 { 00:23:49.474 "method": "bdev_raid_set_options", 00:23:49.474 "params": { 00:23:49.474 "process_window_size_kb": 1024 00:23:49.474 } 00:23:49.474 }, 00:23:49.474 { 00:23:49.474 "method": "bdev_iscsi_set_options", 00:23:49.474 "params": { 00:23:49.474 "timeout_sec": 30 00:23:49.474 } 00:23:49.474 }, 00:23:49.474 { 00:23:49.474 "method": "bdev_nvme_set_options", 00:23:49.474 "params": { 00:23:49.474 "action_on_timeout": "none", 00:23:49.474 "timeout_us": 0, 00:23:49.474 "timeout_admin_us": 0, 00:23:49.474 "keep_alive_timeout_ms": 10000, 00:23:49.474 "transport_retry_count": 4, 00:23:49.474 "arbitration_burst": 0, 00:23:49.474 "low_priority_weight": 0, 00:23:49.474 "medium_priority_weight": 0, 00:23:49.474 "high_priority_weight": 0, 00:23:49.474 "nvme_adminq_poll_period_us": 10000, 00:23:49.474 "nvme_ioq_poll_period_us": 0, 00:23:49.474 "io_queue_requests": 0, 00:23:49.474 "delay_cmd_submit": true, 00:23:49.474 "bdev_retry_count": 3, 00:23:49.474 "transport_ack_timeout": 0, 00:23:49.474 "ctrlr_loss_timeout_sec": 0, 00:23:49.474 "reconnect_delay_sec": 0, 00:23:49.474 "fast_io_fail_timeout_sec": 0, 00:23:49.474 "generate_uuids": false, 00:23:49.474 "transport_tos": 0, 00:23:49.474 "io_path_stat": false, 00:23:49.474 "allow_accel_sequence": false 00:23:49.474 } 00:23:49.474 }, 00:23:49.474 { 00:23:49.474 "method": "bdev_nvme_set_hotplug", 00:23:49.474 "params": { 00:23:49.474 "period_us": 100000, 00:23:49.474 "enable": false 00:23:49.474 } 00:23:49.474 }, 00:23:49.474 { 00:23:49.474 "method": "bdev_malloc_create", 00:23:49.474 "params": { 00:23:49.474 "name": "malloc0", 00:23:49.474 "num_blocks": 8192, 00:23:49.474 "block_size": 4096, 00:23:49.474 "physical_block_size": 4096, 00:23:49.474 "uuid": "c3493c0a-0db5-4293-b966-76666d13ae58", 00:23:49.474 "optimal_io_boundary": 0 00:23:49.474 } 00:23:49.474 }, 00:23:49.474 { 00:23:49.474 "method": "bdev_wait_for_examine" 00:23:49.474 } 00:23:49.474 ] 00:23:49.474 }, 00:23:49.474 { 00:23:49.474 "subsystem": "nbd", 00:23:49.474 "config": [] 00:23:49.474 }, 00:23:49.474 { 00:23:49.474 "subsystem": "scheduler", 00:23:49.474 "config": [ 00:23:49.474 { 00:23:49.474 "method": "framework_set_scheduler", 00:23:49.474 "params": { 00:23:49.474 "name": "static" 00:23:49.474 } 00:23:49.474 } 00:23:49.474 ] 00:23:49.474 }, 00:23:49.474 { 00:23:49.474 "subsystem": "nvmf", 00:23:49.474 "config": [ 00:23:49.474 { 00:23:49.474 "method": "nvmf_set_config", 00:23:49.474 "params": { 00:23:49.474 "discovery_filter": "match_any", 00:23:49.474 "admin_cmd_passthru": { 00:23:49.474 "identify_ctrlr": false 00:23:49.474 } 00:23:49.474 } 00:23:49.474 }, 00:23:49.474 { 00:23:49.474 "method": "nvmf_set_max_subsystems", 00:23:49.474 "params": { 00:23:49.474 "max_subsystems": 1024 00:23:49.474 } 00:23:49.474 }, 00:23:49.474 { 00:23:49.474 "method": "nvmf_set_crdt", 00:23:49.474 "params": { 00:23:49.474 "crdt1": 0, 00:23:49.474 "crdt2": 0, 00:23:49.474 "crdt3": 0 00:23:49.474 } 00:23:49.474 }, 00:23:49.474 { 00:23:49.474 "method": "nvmf_create_transport", 00:23:49.474 "params": { 00:23:49.474 "trtype": "TCP", 00:23:49.474 "max_queue_depth": 128, 00:23:49.474 "max_io_qpairs_per_ctrlr": 
127, 00:23:49.474 "in_capsule_data_size": 4096, 00:23:49.474 "max_io_size": 131072, 00:23:49.474 "io_unit_size": 131072, 00:23:49.474 "max_aq_depth": 128, 00:23:49.474 "num_shared_buffers": 511, 00:23:49.474 "buf_cache_size": 4294967295, 00:23:49.474 "dif_insert_or_strip": false, 00:23:49.474 "zcopy": false, 00:23:49.474 "c2h_success": false, 00:23:49.474 "sock_priority": 0, 00:23:49.474 "abort_timeout_sec": 1 00:23:49.474 } 00:23:49.474 }, 00:23:49.474 { 00:23:49.474 "method": "nvmf_create_subsystem", 00:23:49.474 "params": { 00:23:49.474 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:49.474 "allow_any_host": false, 00:23:49.474 "serial_number": "SPDK00000000000001", 00:23:49.474 "model_number": "SPDK bdev Controller", 00:23:49.474 "max_namespaces": 10, 00:23:49.474 "min_cntlid": 1, 00:23:49.474 "max_cntlid": 65519, 00:23:49.474 "ana_reporting": false 00:23:49.474 } 00:23:49.474 }, 00:23:49.474 { 00:23:49.474 "method": "nvmf_subsystem_add_host", 00:23:49.474 "params": { 00:23:49.474 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:49.474 "host": "nqn.2016-06.io.spdk:host1", 00:23:49.474 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:23:49.474 } 00:23:49.474 }, 00:23:49.474 { 00:23:49.474 "method": "nvmf_subsystem_add_ns", 00:23:49.475 "params": { 00:23:49.475 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:49.475 "namespace": { 00:23:49.475 "nsid": 1, 00:23:49.475 "bdev_name": "malloc0", 00:23:49.475 "nguid": "C3493C0A0DB54293B96676666D13AE58", 00:23:49.475 "uuid": "c3493c0a-0db5-4293-b966-76666d13ae58" 00:23:49.475 } 00:23:49.475 } 00:23:49.475 }, 00:23:49.475 { 00:23:49.475 "method": "nvmf_subsystem_add_listener", 00:23:49.475 "params": { 00:23:49.475 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:49.475 "listen_address": { 00:23:49.475 "trtype": "TCP", 00:23:49.475 "adrfam": "IPv4", 00:23:49.475 "traddr": "10.0.0.2", 00:23:49.475 "trsvcid": "4420" 00:23:49.475 }, 00:23:49.475 "secure_channel": true 00:23:49.475 } 00:23:49.475 } 00:23:49.475 ] 00:23:49.475 } 00:23:49.475 ] 00:23:49.475 }' 00:23:49.475 21:37:10 -- common/autotest_common.sh@712 -- # xtrace_disable 00:23:49.475 21:37:10 -- common/autotest_common.sh@10 -- # set +x 00:23:49.475 21:37:10 -- nvmf/common.sh@469 -- # nvmfpid=77600 00:23:49.475 21:37:10 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:23:49.475 21:37:10 -- nvmf/common.sh@470 -- # waitforlisten 77600 00:23:49.475 21:37:10 -- common/autotest_common.sh@819 -- # '[' -z 77600 ']' 00:23:49.475 21:37:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:49.475 21:37:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:49.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:49.475 21:37:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:49.475 21:37:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:49.475 21:37:10 -- common/autotest_common.sh@10 -- # set +x 00:23:49.475 [2024-07-11 21:37:10.279187] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:23:49.475 [2024-07-11 21:37:10.279284] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:49.475 [2024-07-11 21:37:10.411089] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:49.732 [2024-07-11 21:37:10.504379] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:49.732 [2024-07-11 21:37:10.504566] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:49.732 [2024-07-11 21:37:10.504581] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:49.732 [2024-07-11 21:37:10.504591] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:49.732 [2024-07-11 21:37:10.504624] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:49.989 [2024-07-11 21:37:10.729966] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:49.990 [2024-07-11 21:37:10.761906] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:49.990 [2024-07-11 21:37:10.762145] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:50.554 21:37:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:50.554 21:37:11 -- common/autotest_common.sh@852 -- # return 0 00:23:50.554 21:37:11 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:23:50.554 21:37:11 -- common/autotest_common.sh@718 -- # xtrace_disable 00:23:50.554 21:37:11 -- common/autotest_common.sh@10 -- # set +x 00:23:50.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:50.554 21:37:11 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:50.554 21:37:11 -- target/tls.sh@216 -- # bdevperf_pid=77632 00:23:50.554 21:37:11 -- target/tls.sh@217 -- # waitforlisten 77632 /var/tmp/bdevperf.sock 00:23:50.554 21:37:11 -- common/autotest_common.sh@819 -- # '[' -z 77632 ']' 00:23:50.554 21:37:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:50.554 21:37:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:50.554 21:37:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
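Before any RPCs are issued, the harness waits for the application's UNIX-domain RPC socket ("Waiting for process to start up and listen on ..."). A hedged sketch of one way to poll for that readiness; `wait_for_rpc` is a hypothetical helper, not the harness's own `waitforlisten`, and it assumes rpc.py's `rpc_get_methods` call, which stock SPDK provides:

  # Hypothetical readiness poll: retry a cheap RPC until the socket answers.
  wait_for_rpc() {
    local sock=$1
    for _ in $(seq 1 100); do
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" -t 1 rpc_get_methods >/dev/null 2>&1 && return 0
      sleep 0.5
    done
    return 1
  }
  wait_for_rpc /var/tmp/bdevperf.sock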
00:23:50.554 21:37:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:50.554 21:37:11 -- common/autotest_common.sh@10 -- # set +x 00:23:50.554 21:37:11 -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:23:50.554 21:37:11 -- target/tls.sh@213 -- # echo '{ 00:23:50.554 "subsystems": [ 00:23:50.554 { 00:23:50.554 "subsystem": "iobuf", 00:23:50.554 "config": [ 00:23:50.554 { 00:23:50.554 "method": "iobuf_set_options", 00:23:50.554 "params": { 00:23:50.554 "small_pool_count": 8192, 00:23:50.554 "large_pool_count": 1024, 00:23:50.554 "small_bufsize": 8192, 00:23:50.554 "large_bufsize": 135168 00:23:50.554 } 00:23:50.554 } 00:23:50.554 ] 00:23:50.554 }, 00:23:50.554 { 00:23:50.554 "subsystem": "sock", 00:23:50.554 "config": [ 00:23:50.554 { 00:23:50.554 "method": "sock_impl_set_options", 00:23:50.554 "params": { 00:23:50.554 "impl_name": "uring", 00:23:50.554 "recv_buf_size": 2097152, 00:23:50.554 "send_buf_size": 2097152, 00:23:50.554 "enable_recv_pipe": true, 00:23:50.554 "enable_quickack": false, 00:23:50.554 "enable_placement_id": 0, 00:23:50.554 "enable_zerocopy_send_server": false, 00:23:50.554 "enable_zerocopy_send_client": false, 00:23:50.554 "zerocopy_threshold": 0, 00:23:50.554 "tls_version": 0, 00:23:50.555 "enable_ktls": false 00:23:50.555 } 00:23:50.555 }, 00:23:50.555 { 00:23:50.555 "method": "sock_impl_set_options", 00:23:50.555 "params": { 00:23:50.555 "impl_name": "posix", 00:23:50.555 "recv_buf_size": 2097152, 00:23:50.555 "send_buf_size": 2097152, 00:23:50.555 "enable_recv_pipe": true, 00:23:50.555 "enable_quickack": false, 00:23:50.555 "enable_placement_id": 0, 00:23:50.555 "enable_zerocopy_send_server": true, 00:23:50.555 "enable_zerocopy_send_client": false, 00:23:50.555 "zerocopy_threshold": 0, 00:23:50.555 "tls_version": 0, 00:23:50.555 "enable_ktls": false 00:23:50.555 } 00:23:50.555 }, 00:23:50.555 { 00:23:50.555 "method": "sock_impl_set_options", 00:23:50.555 "params": { 00:23:50.555 "impl_name": "ssl", 00:23:50.555 "recv_buf_size": 4096, 00:23:50.555 "send_buf_size": 4096, 00:23:50.555 "enable_recv_pipe": true, 00:23:50.555 "enable_quickack": false, 00:23:50.555 "enable_placement_id": 0, 00:23:50.555 "enable_zerocopy_send_server": true, 00:23:50.555 "enable_zerocopy_send_client": false, 00:23:50.555 "zerocopy_threshold": 0, 00:23:50.555 "tls_version": 0, 00:23:50.555 "enable_ktls": false 00:23:50.555 } 00:23:50.555 } 00:23:50.555 ] 00:23:50.555 }, 00:23:50.555 { 00:23:50.555 "subsystem": "vmd", 00:23:50.555 "config": [] 00:23:50.555 }, 00:23:50.555 { 00:23:50.555 "subsystem": "accel", 00:23:50.555 "config": [ 00:23:50.555 { 00:23:50.555 "method": "accel_set_options", 00:23:50.555 "params": { 00:23:50.555 "small_cache_size": 128, 00:23:50.555 "large_cache_size": 16, 00:23:50.555 "task_count": 2048, 00:23:50.555 "sequence_count": 2048, 00:23:50.555 "buf_count": 2048 00:23:50.555 } 00:23:50.555 } 00:23:50.555 ] 00:23:50.555 }, 00:23:50.555 { 00:23:50.555 "subsystem": "bdev", 00:23:50.555 "config": [ 00:23:50.555 { 00:23:50.555 "method": "bdev_set_options", 00:23:50.555 "params": { 00:23:50.555 "bdev_io_pool_size": 65535, 00:23:50.555 "bdev_io_cache_size": 256, 00:23:50.555 "bdev_auto_examine": true, 00:23:50.555 "iobuf_small_cache_size": 128, 00:23:50.555 "iobuf_large_cache_size": 16 00:23:50.555 } 00:23:50.555 }, 00:23:50.555 { 00:23:50.555 "method": "bdev_raid_set_options", 00:23:50.555 "params": { 00:23:50.555 "process_window_size_kb": 1024 
00:23:50.555 } 00:23:50.555 }, 00:23:50.555 { 00:23:50.555 "method": "bdev_iscsi_set_options", 00:23:50.555 "params": { 00:23:50.555 "timeout_sec": 30 00:23:50.555 } 00:23:50.555 }, 00:23:50.555 { 00:23:50.555 "method": "bdev_nvme_set_options", 00:23:50.555 "params": { 00:23:50.555 "action_on_timeout": "none", 00:23:50.555 "timeout_us": 0, 00:23:50.555 "timeout_admin_us": 0, 00:23:50.555 "keep_alive_timeout_ms": 10000, 00:23:50.555 "transport_retry_count": 4, 00:23:50.555 "arbitration_burst": 0, 00:23:50.555 "low_priority_weight": 0, 00:23:50.555 "medium_priority_weight": 0, 00:23:50.555 "high_priority_weight": 0, 00:23:50.555 "nvme_adminq_poll_period_us": 10000, 00:23:50.555 "nvme_ioq_poll_period_us": 0, 00:23:50.555 "io_queue_requests": 512, 00:23:50.555 "delay_cmd_submit": true, 00:23:50.555 "bdev_retry_count": 3, 00:23:50.555 "transport_ack_timeout": 0, 00:23:50.555 "ctrlr_loss_timeout_sec": 0, 00:23:50.555 "reconnect_delay_sec": 0, 00:23:50.555 "fast_io_fail_timeout_sec": 0, 00:23:50.555 "generate_uuids": false, 00:23:50.555 "transport_tos": 0, 00:23:50.555 "io_path_stat": false, 00:23:50.555 "allow_accel_sequence": false 00:23:50.555 } 00:23:50.555 }, 00:23:50.555 { 00:23:50.555 "method": "bdev_nvme_attach_controller", 00:23:50.555 "params": { 00:23:50.555 "name": "TLSTEST", 00:23:50.555 "trtype": "TCP", 00:23:50.555 "adrfam": "IPv4", 00:23:50.555 "traddr": "10.0.0.2", 00:23:50.555 "trsvcid": "4420", 00:23:50.555 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:50.555 "prchk_reftag": false, 00:23:50.555 "prchk_guard": false, 00:23:50.555 "ctrlr_loss_timeout_sec": 0, 00:23:50.555 "reconnect_delay_sec": 0, 00:23:50.555 "fast_io_fail_timeout_sec": 0, 00:23:50.555 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:23:50.555 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:50.555 "hdgst": false, 00:23:50.555 "ddgst": false 00:23:50.555 } 00:23:50.555 }, 00:23:50.555 { 00:23:50.555 "method": "bdev_nvme_set_hotplug", 00:23:50.555 "params": { 00:23:50.555 "period_us": 100000, 00:23:50.555 "enable": false 00:23:50.555 } 00:23:50.555 }, 00:23:50.555 { 00:23:50.555 "method": "bdev_wait_for_examine" 00:23:50.555 } 00:23:50.555 ] 00:23:50.555 }, 00:23:50.555 { 00:23:50.555 "subsystem": "nbd", 00:23:50.555 "config": [] 00:23:50.555 } 00:23:50.555 ] 00:23:50.555 }' 00:23:50.555 [2024-07-11 21:37:11.320785] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:23:50.555 [2024-07-11 21:37:11.320884] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77632 ] 00:23:50.555 [2024-07-11 21:37:11.457424] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:50.813 [2024-07-11 21:37:11.552754] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:50.813 [2024-07-11 21:37:11.716804] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:51.378 21:37:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:51.378 21:37:12 -- common/autotest_common.sh@852 -- # return 0 00:23:51.378 21:37:12 -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:51.636 Running I/O for 10 seconds... 
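The initiator side above is bdevperf started idle (`-z`) on its own RPC socket with the bdev/sock config fed through `/dev/fd/63`, and the workload is only kicked off afterwards via bdevperf.py. A condensed sketch of that two-step flow, using a config file instead of the process substitution for brevity (binary and script paths as shown in this log):

  # Step 1: start bdevperf idle on its RPC socket with a JSON config.
  SPDK=/home/vagrant/spdk_repo/spdk
  "$SPDK/build/examples/bdevperf" -z -m 0x4 -r /var/tmp/bdevperf.sock \
      -q 128 -o 4096 -w verify -t 10 -c bdevperf.json &
  # Step 2: once /var/tmp/bdevperf.sock is up, trigger the actual run.
  "$SPDK/examples/bdev/bdevperf/bdevperf.py" -t 20 -s /var/tmp/bdevperf.sock perform_tests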
00:24:01.648 00:24:01.649 Latency(us) 00:24:01.649 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:01.649 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:01.649 Verification LBA range: start 0x0 length 0x2000 00:24:01.649 TLSTESTn1 : 10.01 5509.65 21.52 0.00 0.00 23193.91 5630.14 27405.96 00:24:01.649 =================================================================================================================== 00:24:01.649 Total : 5509.65 21.52 0.00 0.00 23193.91 5630.14 27405.96 00:24:01.649 0 00:24:01.649 21:37:22 -- target/tls.sh@222 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:01.649 21:37:22 -- target/tls.sh@223 -- # killprocess 77632 00:24:01.649 21:37:22 -- common/autotest_common.sh@926 -- # '[' -z 77632 ']' 00:24:01.649 21:37:22 -- common/autotest_common.sh@930 -- # kill -0 77632 00:24:01.649 21:37:22 -- common/autotest_common.sh@931 -- # uname 00:24:01.649 21:37:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:01.649 21:37:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 77632 00:24:01.649 killing process with pid 77632 00:24:01.649 Received shutdown signal, test time was about 10.000000 seconds 00:24:01.649 00:24:01.649 Latency(us) 00:24:01.649 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:01.649 =================================================================================================================== 00:24:01.649 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:01.649 21:37:22 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:24:01.649 21:37:22 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:24:01.649 21:37:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 77632' 00:24:01.649 21:37:22 -- common/autotest_common.sh@945 -- # kill 77632 00:24:01.649 21:37:22 -- common/autotest_common.sh@950 -- # wait 77632 00:24:01.907 21:37:22 -- target/tls.sh@224 -- # killprocess 77600 00:24:01.907 21:37:22 -- common/autotest_common.sh@926 -- # '[' -z 77600 ']' 00:24:01.907 21:37:22 -- common/autotest_common.sh@930 -- # kill -0 77600 00:24:01.907 21:37:22 -- common/autotest_common.sh@931 -- # uname 00:24:01.907 21:37:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:01.907 21:37:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 77600 00:24:01.907 killing process with pid 77600 00:24:01.907 21:37:22 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:24:01.907 21:37:22 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:24:01.907 21:37:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 77600' 00:24:01.907 21:37:22 -- common/autotest_common.sh@945 -- # kill 77600 00:24:01.907 21:37:22 -- common/autotest_common.sh@950 -- # wait 77600 00:24:02.165 21:37:22 -- target/tls.sh@226 -- # trap - SIGINT SIGTERM EXIT 00:24:02.165 21:37:22 -- target/tls.sh@227 -- # cleanup 00:24:02.165 21:37:22 -- target/tls.sh@15 -- # process_shm --id 0 00:24:02.165 21:37:22 -- common/autotest_common.sh@796 -- # type=--id 00:24:02.165 21:37:22 -- common/autotest_common.sh@797 -- # id=0 00:24:02.165 21:37:22 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:24:02.165 21:37:22 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:02.165 21:37:22 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:24:02.165 21:37:22 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 
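The cleanup step above locates the tracepoint shared-memory file the target left behind (`nvmf_trace.0`, created because the target ran with `-i 0 -e 0xFFFF`) and tars it up for offline inspection with `spdk_trace`. A small sketch of that archiving step, with the destination path chosen here only for illustration:

  # Archive the trace shm file so it can be analysed offline later.
  shm_file=$(find /dev/shm -maxdepth 1 -name 'nvmf_trace.0' -printf '%f\n')
  out_dir=./output   # assumed destination; the log writes next to the repo
  [ -n "$shm_file" ] && tar -C /dev/shm/ -czf "$out_dir/nvmf_trace.0_shm.tar.gz" "$shm_file"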
00:24:02.165 21:37:22 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:24:02.165 21:37:22 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:02.165 nvmf_trace.0 00:24:02.165 Process with pid 77632 is not found 00:24:02.165 21:37:22 -- common/autotest_common.sh@811 -- # return 0 00:24:02.165 21:37:22 -- target/tls.sh@16 -- # killprocess 77632 00:24:02.165 21:37:22 -- common/autotest_common.sh@926 -- # '[' -z 77632 ']' 00:24:02.165 21:37:22 -- common/autotest_common.sh@930 -- # kill -0 77632 00:24:02.165 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (77632) - No such process 00:24:02.165 21:37:22 -- common/autotest_common.sh@953 -- # echo 'Process with pid 77632 is not found' 00:24:02.165 21:37:22 -- target/tls.sh@17 -- # nvmftestfini 00:24:02.165 21:37:22 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:02.165 21:37:22 -- nvmf/common.sh@116 -- # sync 00:24:02.165 21:37:23 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:24:02.165 21:37:23 -- nvmf/common.sh@119 -- # set +e 00:24:02.165 21:37:23 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:02.165 21:37:23 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:24:02.165 rmmod nvme_tcp 00:24:02.165 rmmod nvme_fabrics 00:24:02.165 rmmod nvme_keyring 00:24:02.165 21:37:23 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:02.165 21:37:23 -- nvmf/common.sh@123 -- # set -e 00:24:02.165 21:37:23 -- nvmf/common.sh@124 -- # return 0 00:24:02.165 21:37:23 -- nvmf/common.sh@477 -- # '[' -n 77600 ']' 00:24:02.165 21:37:23 -- nvmf/common.sh@478 -- # killprocess 77600 00:24:02.165 21:37:23 -- common/autotest_common.sh@926 -- # '[' -z 77600 ']' 00:24:02.165 21:37:23 -- common/autotest_common.sh@930 -- # kill -0 77600 00:24:02.165 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (77600) - No such process 00:24:02.165 Process with pid 77600 is not found 00:24:02.165 21:37:23 -- common/autotest_common.sh@953 -- # echo 'Process with pid 77600 is not found' 00:24:02.165 21:37:23 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:02.165 21:37:23 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:24:02.165 21:37:23 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:24:02.165 21:37:23 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:02.165 21:37:23 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:24:02.165 21:37:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:02.165 21:37:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:02.165 21:37:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:02.165 21:37:23 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:24:02.423 21:37:23 -- target/tls.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:24:02.423 00:24:02.423 real 1m10.613s 00:24:02.423 user 1m49.928s 00:24:02.423 sys 0m24.300s 00:24:02.423 21:37:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:02.423 21:37:23 -- common/autotest_common.sh@10 -- # set +x 00:24:02.423 ************************************ 00:24:02.423 END TEST nvmf_tls 00:24:02.423 ************************************ 00:24:02.423 21:37:23 -- nvmf/nvmf.sh@60 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:24:02.423 21:37:23 -- 
common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:24:02.423 21:37:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:02.423 21:37:23 -- common/autotest_common.sh@10 -- # set +x 00:24:02.423 ************************************ 00:24:02.423 START TEST nvmf_fips 00:24:02.423 ************************************ 00:24:02.423 21:37:23 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:24:02.423 * Looking for test storage... 00:24:02.423 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:24:02.423 21:37:23 -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:02.423 21:37:23 -- nvmf/common.sh@7 -- # uname -s 00:24:02.423 21:37:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:02.423 21:37:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:02.423 21:37:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:02.423 21:37:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:02.423 21:37:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:02.423 21:37:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:02.423 21:37:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:02.423 21:37:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:02.423 21:37:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:02.423 21:37:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:02.423 21:37:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:65f0dc09-2f81-4c7b-a413-2a2a000e2750 00:24:02.423 21:37:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=65f0dc09-2f81-4c7b-a413-2a2a000e2750 00:24:02.423 21:37:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:02.423 21:37:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:02.423 21:37:23 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:02.423 21:37:23 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:02.423 21:37:23 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:02.423 21:37:23 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:02.423 21:37:23 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:02.423 21:37:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.424 21:37:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.424 21:37:23 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.424 21:37:23 -- paths/export.sh@5 -- # export PATH 00:24:02.424 21:37:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.424 21:37:23 -- nvmf/common.sh@46 -- # : 0 00:24:02.424 21:37:23 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:02.424 21:37:23 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:02.424 21:37:23 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:02.424 21:37:23 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:02.424 21:37:23 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:02.424 21:37:23 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:02.424 21:37:23 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:02.424 21:37:23 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:02.424 21:37:23 -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:02.424 21:37:23 -- fips/fips.sh@89 -- # check_openssl_version 00:24:02.424 21:37:23 -- fips/fips.sh@83 -- # local target=3.0.0 00:24:02.424 21:37:23 -- fips/fips.sh@85 -- # awk '{print $2}' 00:24:02.424 21:37:23 -- fips/fips.sh@85 -- # openssl version 00:24:02.424 21:37:23 -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:24:02.424 21:37:23 -- scripts/common.sh@375 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:24:02.424 21:37:23 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:24:02.424 21:37:23 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:24:02.424 21:37:23 -- scripts/common.sh@335 -- # IFS=.-: 00:24:02.424 21:37:23 -- scripts/common.sh@335 -- # read -ra ver1 00:24:02.424 21:37:23 -- scripts/common.sh@336 -- # IFS=.-: 00:24:02.424 21:37:23 -- scripts/common.sh@336 -- # read -ra ver2 00:24:02.424 21:37:23 -- scripts/common.sh@337 -- # local 'op=>=' 00:24:02.424 21:37:23 -- scripts/common.sh@339 -- # ver1_l=3 00:24:02.424 21:37:23 -- scripts/common.sh@340 -- # ver2_l=3 00:24:02.424 21:37:23 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:24:02.424 21:37:23 -- scripts/common.sh@343 -- # case "$op" in 00:24:02.424 21:37:23 -- scripts/common.sh@347 -- # : 1 00:24:02.424 21:37:23 -- scripts/common.sh@363 -- # (( v = 0 )) 00:24:02.424 21:37:23 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:02.424 21:37:23 -- scripts/common.sh@364 -- # decimal 3 00:24:02.424 21:37:23 -- scripts/common.sh@352 -- # local d=3 00:24:02.424 21:37:23 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:24:02.424 21:37:23 -- scripts/common.sh@354 -- # echo 3 00:24:02.424 21:37:23 -- scripts/common.sh@364 -- # ver1[v]=3 00:24:02.424 21:37:23 -- scripts/common.sh@365 -- # decimal 3 00:24:02.424 21:37:23 -- scripts/common.sh@352 -- # local d=3 00:24:02.424 21:37:23 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:24:02.424 21:37:23 -- scripts/common.sh@354 -- # echo 3 00:24:02.424 21:37:23 -- scripts/common.sh@365 -- # ver2[v]=3 00:24:02.424 21:37:23 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:24:02.424 21:37:23 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:24:02.424 21:37:23 -- scripts/common.sh@363 -- # (( v++ )) 00:24:02.424 21:37:23 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:02.424 21:37:23 -- scripts/common.sh@364 -- # decimal 0 00:24:02.424 21:37:23 -- scripts/common.sh@352 -- # local d=0 00:24:02.424 21:37:23 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:24:02.424 21:37:23 -- scripts/common.sh@354 -- # echo 0 00:24:02.424 21:37:23 -- scripts/common.sh@364 -- # ver1[v]=0 00:24:02.424 21:37:23 -- scripts/common.sh@365 -- # decimal 0 00:24:02.424 21:37:23 -- scripts/common.sh@352 -- # local d=0 00:24:02.424 21:37:23 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:24:02.424 21:37:23 -- scripts/common.sh@354 -- # echo 0 00:24:02.424 21:37:23 -- scripts/common.sh@365 -- # ver2[v]=0 00:24:02.424 21:37:23 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:24:02.424 21:37:23 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:24:02.424 21:37:23 -- scripts/common.sh@363 -- # (( v++ )) 00:24:02.424 21:37:23 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:02.424 21:37:23 -- scripts/common.sh@364 -- # decimal 9 00:24:02.424 21:37:23 -- scripts/common.sh@352 -- # local d=9 00:24:02.424 21:37:23 -- scripts/common.sh@353 -- # [[ 9 =~ ^[0-9]+$ ]] 00:24:02.424 21:37:23 -- scripts/common.sh@354 -- # echo 9 00:24:02.424 21:37:23 -- scripts/common.sh@364 -- # ver1[v]=9 00:24:02.424 21:37:23 -- scripts/common.sh@365 -- # decimal 0 00:24:02.424 21:37:23 -- scripts/common.sh@352 -- # local d=0 00:24:02.424 21:37:23 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:24:02.424 21:37:23 -- scripts/common.sh@354 -- # echo 0 00:24:02.424 21:37:23 -- scripts/common.sh@365 -- # ver2[v]=0 00:24:02.424 21:37:23 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:24:02.424 21:37:23 -- scripts/common.sh@366 -- # return 0 00:24:02.424 21:37:23 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:24:02.424 21:37:23 -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:24:02.424 21:37:23 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:24:02.424 21:37:23 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:24:02.424 21:37:23 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:24:02.424 21:37:23 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:24:02.424 21:37:23 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:24:02.424 21:37:23 -- fips/fips.sh@105 -- # export OPENSSL_FORCE_FIPS_MODE=build_openssl_config 00:24:02.424 21:37:23 -- fips/fips.sh@105 -- # OPENSSL_FORCE_FIPS_MODE=build_openssl_config 00:24:02.424 21:37:23 -- fips/fips.sh@114 -- # build_openssl_config 00:24:02.424 21:37:23 -- fips/fips.sh@37 -- # cat 00:24:02.424 21:37:23 -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:24:02.424 21:37:23 -- fips/fips.sh@58 -- # cat - 00:24:02.424 21:37:23 -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:24:02.424 21:37:23 -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:24:02.424 21:37:23 -- fips/fips.sh@117 -- # mapfile -t providers 00:24:02.424 21:37:23 -- fips/fips.sh@117 -- # OPENSSL_CONF=spdk_fips.conf 00:24:02.424 21:37:23 -- fips/fips.sh@117 -- # openssl list -providers 00:24:02.424 21:37:23 -- fips/fips.sh@117 -- # grep name 00:24:02.682 21:37:23 -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:24:02.682 21:37:23 -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:24:02.682 21:37:23 -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:24:02.682 21:37:23 -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:24:02.682 21:37:23 -- common/autotest_common.sh@640 -- # local es=0 00:24:02.682 21:37:23 -- common/autotest_common.sh@642 -- # valid_exec_arg openssl md5 /dev/fd/62 00:24:02.682 21:37:23 -- common/autotest_common.sh@628 -- # local arg=openssl 00:24:02.682 21:37:23 -- fips/fips.sh@128 -- # : 00:24:02.682 21:37:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:24:02.682 21:37:23 -- common/autotest_common.sh@632 -- # type -t openssl 00:24:02.682 21:37:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:24:02.682 21:37:23 -- common/autotest_common.sh@634 -- # type -P openssl 00:24:02.682 21:37:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:24:02.682 21:37:23 -- common/autotest_common.sh@634 -- # arg=/usr/bin/openssl 00:24:02.682 21:37:23 -- common/autotest_common.sh@634 -- # [[ -x /usr/bin/openssl ]] 00:24:02.682 21:37:23 -- common/autotest_common.sh@643 -- # openssl md5 /dev/fd/62 00:24:02.682 Error setting digest 00:24:02.682 00F25A729E7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:24:02.682 00F25A729E7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:24:02.682 21:37:23 -- common/autotest_common.sh@643 -- # es=1 00:24:02.682 21:37:23 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:24:02.682 21:37:23 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:24:02.682 21:37:23 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 
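fips.sh is effectively probing whether this OpenSSL 3.x build can enforce FIPS: it checks the version, the presence of the fips provider module, the provider list, and finally confirms that a non-approved digest (MD5) is rejected, which is exactly the "Error setting digest" failure above. A rough equivalent of those last two probes, assuming a stock OpenSSL 3.x CLI:

  # Expect a 'fips' entry among the loaded providers.
  openssl list -providers | grep -i name
  # In enforcing mode, MD5 must fail with an 'unsupported' fetch error.
  if echo test | openssl md5 >/dev/null 2>&1; then
      echo "md5 succeeded: FIPS restrictions are NOT active" >&2
  else
      echo "md5 rejected: FIPS restrictions are active"
  fi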
00:24:02.682 21:37:23 -- fips/fips.sh@131 -- # nvmftestinit 00:24:02.682 21:37:23 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:24:02.682 21:37:23 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:02.682 21:37:23 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:02.682 21:37:23 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:02.682 21:37:23 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:02.682 21:37:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:02.682 21:37:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:02.682 21:37:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:02.682 21:37:23 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:24:02.682 21:37:23 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:24:02.682 21:37:23 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:24:02.682 21:37:23 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:24:02.682 21:37:23 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:24:02.682 21:37:23 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:24:02.682 21:37:23 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:02.682 21:37:23 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:02.682 21:37:23 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:24:02.682 21:37:23 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:24:02.682 21:37:23 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:02.682 21:37:23 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:02.682 21:37:23 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:02.682 21:37:23 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:02.682 21:37:23 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:02.682 21:37:23 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:02.682 21:37:23 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:02.682 21:37:23 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:02.682 21:37:23 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:24:02.682 21:37:23 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:24:02.682 Cannot find device "nvmf_tgt_br" 00:24:02.682 21:37:23 -- nvmf/common.sh@154 -- # true 00:24:02.682 21:37:23 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:24:02.682 Cannot find device "nvmf_tgt_br2" 00:24:02.682 21:37:23 -- nvmf/common.sh@155 -- # true 00:24:02.682 21:37:23 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:24:02.682 21:37:23 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:24:02.682 Cannot find device "nvmf_tgt_br" 00:24:02.682 21:37:23 -- nvmf/common.sh@157 -- # true 00:24:02.682 21:37:23 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:24:02.682 Cannot find device "nvmf_tgt_br2" 00:24:02.682 21:37:23 -- nvmf/common.sh@158 -- # true 00:24:02.682 21:37:23 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:24:02.682 21:37:23 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:24:02.682 21:37:23 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:02.682 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:02.682 21:37:23 -- nvmf/common.sh@161 -- # true 00:24:02.682 21:37:23 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:02.682 Cannot open network namespace "nvmf_tgt_ns_spdk": No such 
file or directory 00:24:02.682 21:37:23 -- nvmf/common.sh@162 -- # true 00:24:02.682 21:37:23 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:24:02.682 21:37:23 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:02.682 21:37:23 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:02.682 21:37:23 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:02.682 21:37:23 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:02.682 21:37:23 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:02.940 21:37:23 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:02.940 21:37:23 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:24:02.940 21:37:23 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:24:02.940 21:37:23 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:24:02.940 21:37:23 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:24:02.940 21:37:23 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:24:02.940 21:37:23 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:24:02.940 21:37:23 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:02.940 21:37:23 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:02.940 21:37:23 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:02.940 21:37:23 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:24:02.940 21:37:23 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:24:02.940 21:37:23 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:24:02.940 21:37:23 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:02.940 21:37:23 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:02.940 21:37:23 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:02.941 21:37:23 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:02.941 21:37:23 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:24:02.941 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:02.941 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:24:02.941 00:24:02.941 --- 10.0.0.2 ping statistics --- 00:24:02.941 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:02.941 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:24:02.941 21:37:23 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:24:02.941 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:02.941 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:24:02.941 00:24:02.941 --- 10.0.0.3 ping statistics --- 00:24:02.941 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:02.941 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:24:02.941 21:37:23 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:02.941 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:02.941 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:24:02.941 00:24:02.941 --- 10.0.0.1 ping statistics --- 00:24:02.941 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:02.941 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:24:02.941 21:37:23 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:02.941 21:37:23 -- nvmf/common.sh@421 -- # return 0 00:24:02.941 21:37:23 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:02.941 21:37:23 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:02.941 21:37:23 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:24:02.941 21:37:23 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:24:02.941 21:37:23 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:02.941 21:37:23 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:24:02.941 21:37:23 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:24:02.941 21:37:23 -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:24:02.941 21:37:23 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:02.941 21:37:23 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:02.941 21:37:23 -- common/autotest_common.sh@10 -- # set +x 00:24:02.941 21:37:23 -- nvmf/common.sh@469 -- # nvmfpid=77976 00:24:02.941 21:37:23 -- nvmf/common.sh@470 -- # waitforlisten 77976 00:24:02.941 21:37:23 -- common/autotest_common.sh@819 -- # '[' -z 77976 ']' 00:24:02.941 21:37:23 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:02.941 21:37:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:02.941 21:37:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:02.941 21:37:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:02.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:02.941 21:37:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:02.941 21:37:23 -- common/autotest_common.sh@10 -- # set +x 00:24:02.941 [2024-07-11 21:37:23.876695] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:24:02.941 [2024-07-11 21:37:23.876817] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:03.199 [2024-07-11 21:37:24.018603] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:03.199 [2024-07-11 21:37:24.110392] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:03.199 [2024-07-11 21:37:24.110582] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:03.199 [2024-07-11 21:37:24.110598] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:03.199 [2024-07-11 21:37:24.110608] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
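The nvmf_veth_init sequence above builds the test topology from scratch: a network namespace for the target, veth pairs whose bridge-side ends are enslaved to `nvmf_br`, addresses in 10.0.0.0/24, and a ping check in each direction. A condensed sketch showing one target interface (the log also adds `nvmf_tgt_if2`/10.0.0.3 and the iptables ACCEPT rules), with commands taken from the trace above:

  # Rebuild the initiator<->target veth/bridge topology used by these tests.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$l" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ping -c 1 10.0.0.2   # initiator reaching the target namespace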
00:24:03.199 [2024-07-11 21:37:24.110641] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:04.202 21:37:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:04.202 21:37:24 -- common/autotest_common.sh@852 -- # return 0 00:24:04.202 21:37:24 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:04.202 21:37:24 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:04.202 21:37:24 -- common/autotest_common.sh@10 -- # set +x 00:24:04.202 21:37:24 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:04.202 21:37:24 -- fips/fips.sh@134 -- # trap cleanup EXIT 00:24:04.202 21:37:24 -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:04.202 21:37:24 -- fips/fips.sh@138 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:24:04.202 21:37:24 -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:04.202 21:37:24 -- fips/fips.sh@140 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:24:04.202 21:37:24 -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:24:04.202 21:37:24 -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:24:04.202 21:37:24 -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:04.202 [2024-07-11 21:37:25.150867] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:04.461 [2024-07-11 21:37:25.166807] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:04.461 [2024-07-11 21:37:25.167033] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:04.461 malloc0 00:24:04.461 21:37:25 -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:04.461 21:37:25 -- fips/fips.sh@148 -- # bdevperf_pid=78021 00:24:04.461 21:37:25 -- fips/fips.sh@149 -- # waitforlisten 78021 /var/tmp/bdevperf.sock 00:24:04.461 21:37:25 -- common/autotest_common.sh@819 -- # '[' -z 78021 ']' 00:24:04.461 21:37:25 -- fips/fips.sh@146 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:04.461 21:37:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:04.461 21:37:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:04.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:04.461 21:37:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:04.461 21:37:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:04.461 21:37:25 -- common/autotest_common.sh@10 -- # set +x 00:24:04.461 [2024-07-11 21:37:25.299401] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
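The FIPS case reuses the TLS plumbing: a PSK interchange key is written to key.txt with 0600 permissions, registered on the target per host, and then passed to the initiator's `bdev_nvme_attach_controller`. A sketch of that wiring with key value and paths copied from this log; the attach command is verbatim from the trace below, while the `nvmf_subsystem_add_host --psk` form is an assumption that rpc.py mirrors the JSON "psk" parameter shown earlier:

  key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
  key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt
  echo -n "$key" > "$key_path" && chmod 0600 "$key_path"
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Target side: only host1 with this PSK may connect to cnode1 (assumed CLI form).
  "$rpc" nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$key_path"
  # Initiator side (bdevperf RPC socket): attach over TCP with the same PSK.
  "$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      -q nqn.2016-06.io.spdk:host1 --psk "$key_path"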
00:24:04.461 [2024-07-11 21:37:25.299555] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78021 ] 00:24:04.719 [2024-07-11 21:37:25.439075] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:04.719 [2024-07-11 21:37:25.540540] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:05.282 21:37:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:05.282 21:37:26 -- common/autotest_common.sh@852 -- # return 0 00:24:05.282 21:37:26 -- fips/fips.sh@151 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:24:05.847 [2024-07-11 21:37:26.506146] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:05.847 TLSTESTn1 00:24:05.847 21:37:26 -- fips/fips.sh@155 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:05.847 Running I/O for 10 seconds... 00:24:15.836 00:24:15.836 Latency(us) 00:24:15.836 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:15.836 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:15.836 Verification LBA range: start 0x0 length 0x2000 00:24:15.836 TLSTESTn1 : 10.02 5658.80 22.10 0.00 0.00 22582.13 4855.62 26810.18 00:24:15.836 =================================================================================================================== 00:24:15.836 Total : 5658.80 22.10 0.00 0.00 22582.13 4855.62 26810.18 00:24:15.836 0 00:24:15.836 21:37:36 -- fips/fips.sh@1 -- # cleanup 00:24:15.836 21:37:36 -- fips/fips.sh@15 -- # process_shm --id 0 00:24:15.836 21:37:36 -- common/autotest_common.sh@796 -- # type=--id 00:24:15.836 21:37:36 -- common/autotest_common.sh@797 -- # id=0 00:24:15.836 21:37:36 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:24:15.836 21:37:36 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:15.836 21:37:36 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:24:15.836 21:37:36 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:24:15.836 21:37:36 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:24:15.836 21:37:36 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:15.836 nvmf_trace.0 00:24:16.094 21:37:36 -- common/autotest_common.sh@811 -- # return 0 00:24:16.094 21:37:36 -- fips/fips.sh@16 -- # killprocess 78021 00:24:16.094 21:37:36 -- common/autotest_common.sh@926 -- # '[' -z 78021 ']' 00:24:16.094 21:37:36 -- common/autotest_common.sh@930 -- # kill -0 78021 00:24:16.094 21:37:36 -- common/autotest_common.sh@931 -- # uname 00:24:16.094 21:37:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:16.094 21:37:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 78021 00:24:16.094 21:37:36 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:24:16.094 21:37:36 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:24:16.094 21:37:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 78021' 00:24:16.094 killing 
process with pid 78021 00:24:16.094 Received shutdown signal, test time was about 10.000000 seconds 00:24:16.094 00:24:16.094 Latency(us) 00:24:16.094 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:16.094 =================================================================================================================== 00:24:16.094 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:16.094 21:37:36 -- common/autotest_common.sh@945 -- # kill 78021 00:24:16.094 21:37:36 -- common/autotest_common.sh@950 -- # wait 78021 00:24:16.353 21:37:37 -- fips/fips.sh@17 -- # nvmftestfini 00:24:16.353 21:37:37 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:16.353 21:37:37 -- nvmf/common.sh@116 -- # sync 00:24:16.353 21:37:37 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:24:16.353 21:37:37 -- nvmf/common.sh@119 -- # set +e 00:24:16.353 21:37:37 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:16.353 21:37:37 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:24:16.353 rmmod nvme_tcp 00:24:16.353 rmmod nvme_fabrics 00:24:16.353 rmmod nvme_keyring 00:24:16.353 21:37:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:16.353 21:37:37 -- nvmf/common.sh@123 -- # set -e 00:24:16.353 21:37:37 -- nvmf/common.sh@124 -- # return 0 00:24:16.353 21:37:37 -- nvmf/common.sh@477 -- # '[' -n 77976 ']' 00:24:16.353 21:37:37 -- nvmf/common.sh@478 -- # killprocess 77976 00:24:16.353 21:37:37 -- common/autotest_common.sh@926 -- # '[' -z 77976 ']' 00:24:16.353 21:37:37 -- common/autotest_common.sh@930 -- # kill -0 77976 00:24:16.353 21:37:37 -- common/autotest_common.sh@931 -- # uname 00:24:16.353 21:37:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:16.353 21:37:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 77976 00:24:16.353 21:37:37 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:24:16.353 killing process with pid 77976 00:24:16.353 21:37:37 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:24:16.353 21:37:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 77976' 00:24:16.353 21:37:37 -- common/autotest_common.sh@945 -- # kill 77976 00:24:16.353 21:37:37 -- common/autotest_common.sh@950 -- # wait 77976 00:24:16.612 21:37:37 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:16.613 21:37:37 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:24:16.613 21:37:37 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:24:16.613 21:37:37 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:16.613 21:37:37 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:24:16.613 21:37:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:16.613 21:37:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:16.613 21:37:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:16.613 21:37:37 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:24:16.613 21:37:37 -- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:24:16.613 00:24:16.613 real 0m14.281s 00:24:16.613 user 0m19.516s 00:24:16.613 sys 0m5.774s 00:24:16.613 21:37:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:16.613 21:37:37 -- common/autotest_common.sh@10 -- # set +x 00:24:16.613 ************************************ 00:24:16.613 END TEST nvmf_fips 00:24:16.613 ************************************ 00:24:16.613 21:37:37 -- nvmf/nvmf.sh@63 -- # '[' 1 -eq 1 ']' 00:24:16.613 21:37:37 -- nvmf/nvmf.sh@64 -- # run_test nvmf_fuzz 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:24:16.613 21:37:37 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:24:16.613 21:37:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:16.613 21:37:37 -- common/autotest_common.sh@10 -- # set +x 00:24:16.613 ************************************ 00:24:16.613 START TEST nvmf_fuzz 00:24:16.613 ************************************ 00:24:16.613 21:37:37 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:24:16.872 * Looking for test storage... 00:24:16.872 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:24:16.872 21:37:37 -- target/fabrics_fuzz.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:16.872 21:37:37 -- nvmf/common.sh@7 -- # uname -s 00:24:16.872 21:37:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:16.872 21:37:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:16.872 21:37:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:16.872 21:37:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:16.872 21:37:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:16.872 21:37:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:16.872 21:37:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:16.872 21:37:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:16.872 21:37:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:16.872 21:37:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:16.872 21:37:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:65f0dc09-2f81-4c7b-a413-2a2a000e2750 00:24:16.872 21:37:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=65f0dc09-2f81-4c7b-a413-2a2a000e2750 00:24:16.872 21:37:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:16.872 21:37:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:16.872 21:37:37 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:16.872 21:37:37 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:16.872 21:37:37 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:16.872 21:37:37 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:16.872 21:37:37 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:16.872 21:37:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:16.872 21:37:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:16.872 
21:37:37 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:16.872 21:37:37 -- paths/export.sh@5 -- # export PATH 00:24:16.872 21:37:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:16.872 21:37:37 -- nvmf/common.sh@46 -- # : 0 00:24:16.872 21:37:37 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:16.872 21:37:37 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:16.872 21:37:37 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:16.872 21:37:37 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:16.872 21:37:37 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:16.872 21:37:37 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:16.872 21:37:37 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:16.872 21:37:37 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:16.872 21:37:37 -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:24:16.872 21:37:37 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:24:16.872 21:37:37 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:16.872 21:37:37 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:16.872 21:37:37 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:16.872 21:37:37 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:16.872 21:37:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:16.872 21:37:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:16.872 21:37:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:16.872 21:37:37 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:24:16.872 21:37:37 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:24:16.872 21:37:37 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:24:16.872 21:37:37 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:24:16.872 21:37:37 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:24:16.872 21:37:37 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:24:16.872 21:37:37 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:16.872 21:37:37 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:16.872 21:37:37 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:24:16.872 21:37:37 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:24:16.872 21:37:37 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:16.872 21:37:37 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:16.872 21:37:37 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:16.872 21:37:37 -- nvmf/common.sh@147 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:16.872 21:37:37 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:16.872 21:37:37 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:16.872 21:37:37 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:16.872 21:37:37 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:16.872 21:37:37 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:24:16.872 21:37:37 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:24:16.872 Cannot find device "nvmf_tgt_br" 00:24:16.872 21:37:37 -- nvmf/common.sh@154 -- # true 00:24:16.872 21:37:37 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:24:16.872 Cannot find device "nvmf_tgt_br2" 00:24:16.872 21:37:37 -- nvmf/common.sh@155 -- # true 00:24:16.872 21:37:37 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:24:16.872 21:37:37 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:24:16.872 Cannot find device "nvmf_tgt_br" 00:24:16.872 21:37:37 -- nvmf/common.sh@157 -- # true 00:24:16.872 21:37:37 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:24:16.872 Cannot find device "nvmf_tgt_br2" 00:24:16.872 21:37:37 -- nvmf/common.sh@158 -- # true 00:24:16.872 21:37:37 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:24:16.872 21:37:37 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:24:16.872 21:37:37 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:16.872 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:16.872 21:37:37 -- nvmf/common.sh@161 -- # true 00:24:16.872 21:37:37 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:16.872 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:16.872 21:37:37 -- nvmf/common.sh@162 -- # true 00:24:16.872 21:37:37 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:24:16.872 21:37:37 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:16.872 21:37:37 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:16.872 21:37:37 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:16.872 21:37:37 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:16.872 21:37:37 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:16.872 21:37:37 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:16.872 21:37:37 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:24:16.872 21:37:37 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:24:17.129 21:37:37 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:24:17.129 21:37:37 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:24:17.129 21:37:37 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:24:17.129 21:37:37 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:24:17.129 21:37:37 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:17.129 21:37:37 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:17.129 21:37:37 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:17.130 21:37:37 -- nvmf/common.sh@191 -- # ip link add nvmf_br type 
bridge 00:24:17.130 21:37:37 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:24:17.130 21:37:37 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:24:17.130 21:37:37 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:17.130 21:37:37 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:17.130 21:37:37 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:17.130 21:37:37 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:17.130 21:37:37 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:24:17.130 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:17.130 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.095 ms 00:24:17.130 00:24:17.130 --- 10.0.0.2 ping statistics --- 00:24:17.130 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:17.130 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:24:17.130 21:37:37 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:24:17.130 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:17.130 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.085 ms 00:24:17.130 00:24:17.130 --- 10.0.0.3 ping statistics --- 00:24:17.130 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:17.130 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:24:17.130 21:37:37 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:17.130 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:17.130 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.057 ms 00:24:17.130 00:24:17.130 --- 10.0.0.1 ping statistics --- 00:24:17.130 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:17.130 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:24:17.130 21:37:37 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:17.130 21:37:37 -- nvmf/common.sh@421 -- # return 0 00:24:17.130 21:37:37 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:17.130 21:37:37 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:17.130 21:37:37 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:24:17.130 21:37:37 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:24:17.130 21:37:37 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:17.130 21:37:37 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:24:17.130 21:37:37 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:24:17.130 21:37:37 -- target/fabrics_fuzz.sh@14 -- # nvmfpid=78339 00:24:17.130 21:37:37 -- target/fabrics_fuzz.sh@13 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:24:17.130 21:37:37 -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:24:17.130 21:37:37 -- target/fabrics_fuzz.sh@18 -- # waitforlisten 78339 00:24:17.130 21:37:37 -- common/autotest_common.sh@819 -- # '[' -z 78339 ']' 00:24:17.130 21:37:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:17.130 21:37:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:17.130 21:37:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:17.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
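The trace above is SPDK's nvmf_veth_init: it first tears down any leftovers from a previous run (the "Cannot find device" and "Cannot open network namespace" messages are expected on a clean host), then rebuilds a small virtual topology so the initiator in the root namespace can reach the nvmf target inside the nvmf_tgt_ns_spdk namespace over TCP port 4420. Below is a condensed, illustrative sketch of those steps using the names from the trace; it is not a verbatim copy of nvmf/common.sh and it omits the second target interface (nvmf_tgt_if2 / 10.0.0.3), which follows the same pattern.

  ip netns add nvmf_tgt_ns_spdk                                   # target gets its own namespace
  ip link add nvmf_init_if type veth peer name nvmf_init_br       # initiator-side veth pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br         # target-side veth pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                  # move the target end into the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if                        # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # target address
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up       # bridge the root-namespace ends together
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2                                               # sanity check, as in the trace

With the namespace in place, the target is launched as NVMF_TARGET_NS_CMD plus NVMF_APP, i.e. `ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0x1`, and waitforlisten blocks until the RPC socket /var/tmp/spdk.sock is available, which is the message echoed just above.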
00:24:17.130 21:37:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:17.130 21:37:37 -- common/autotest_common.sh@10 -- # set +x 00:24:18.068 21:37:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:18.068 21:37:38 -- common/autotest_common.sh@852 -- # return 0 00:24:18.068 21:37:38 -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:18.068 21:37:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:18.068 21:37:38 -- common/autotest_common.sh@10 -- # set +x 00:24:18.068 21:37:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:18.068 21:37:39 -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:24:18.068 21:37:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:18.068 21:37:39 -- common/autotest_common.sh@10 -- # set +x 00:24:18.326 Malloc0 00:24:18.326 21:37:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:18.326 21:37:39 -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:18.326 21:37:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:18.326 21:37:39 -- common/autotest_common.sh@10 -- # set +x 00:24:18.326 21:37:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:18.326 21:37:39 -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:18.326 21:37:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:18.326 21:37:39 -- common/autotest_common.sh@10 -- # set +x 00:24:18.326 21:37:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:18.326 21:37:39 -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:18.326 21:37:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:18.326 21:37:39 -- common/autotest_common.sh@10 -- # set +x 00:24:18.326 21:37:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:18.326 21:37:39 -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:24:18.326 21:37:39 -- target/fabrics_fuzz.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:24:18.584 Shutting down the fuzz application 00:24:18.584 21:37:39 -- target/fabrics_fuzz.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:24:18.842 Shutting down the fuzz application 00:24:18.842 21:37:39 -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:18.842 21:37:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:18.842 21:37:39 -- common/autotest_common.sh@10 -- # set +x 00:24:18.842 21:37:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:18.842 21:37:39 -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:24:18.842 21:37:39 -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:24:18.842 21:37:39 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:18.842 21:37:39 -- nvmf/common.sh@116 -- # sync 00:24:19.100 21:37:39 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:24:19.100 21:37:39 -- nvmf/common.sh@119 -- # set +e 00:24:19.100 21:37:39 -- 
nvmf/common.sh@120 -- # for i in {1..20} 00:24:19.100 21:37:39 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:24:19.100 rmmod nvme_tcp 00:24:19.100 rmmod nvme_fabrics 00:24:19.100 rmmod nvme_keyring 00:24:19.100 21:37:39 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:19.100 21:37:39 -- nvmf/common.sh@123 -- # set -e 00:24:19.100 21:37:39 -- nvmf/common.sh@124 -- # return 0 00:24:19.100 21:37:39 -- nvmf/common.sh@477 -- # '[' -n 78339 ']' 00:24:19.100 21:37:39 -- nvmf/common.sh@478 -- # killprocess 78339 00:24:19.100 21:37:39 -- common/autotest_common.sh@926 -- # '[' -z 78339 ']' 00:24:19.100 21:37:39 -- common/autotest_common.sh@930 -- # kill -0 78339 00:24:19.100 21:37:39 -- common/autotest_common.sh@931 -- # uname 00:24:19.100 21:37:39 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:19.100 21:37:39 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 78339 00:24:19.100 21:37:39 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:24:19.100 21:37:39 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:24:19.100 killing process with pid 78339 00:24:19.100 21:37:39 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 78339' 00:24:19.100 21:37:39 -- common/autotest_common.sh@945 -- # kill 78339 00:24:19.100 21:37:39 -- common/autotest_common.sh@950 -- # wait 78339 00:24:19.358 21:37:40 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:19.358 21:37:40 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:24:19.358 21:37:40 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:24:19.358 21:37:40 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:19.358 21:37:40 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:24:19.358 21:37:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:19.358 21:37:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:19.358 21:37:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:19.358 21:37:40 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:24:19.358 21:37:40 -- target/fabrics_fuzz.sh@39 -- # rm /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs1.txt /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs2.txt 00:24:19.358 00:24:19.358 real 0m2.708s 00:24:19.358 user 0m2.838s 00:24:19.358 sys 0m0.656s 00:24:19.358 21:37:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:19.358 ************************************ 00:24:19.358 END TEST nvmf_fuzz 00:24:19.358 ************************************ 00:24:19.358 21:37:40 -- common/autotest_common.sh@10 -- # set +x 00:24:19.358 21:37:40 -- nvmf/nvmf.sh@65 -- # run_test nvmf_multiconnection /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:24:19.358 21:37:40 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:24:19.358 21:37:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:19.358 21:37:40 -- common/autotest_common.sh@10 -- # set +x 00:24:19.358 ************************************ 00:24:19.358 START TEST nvmf_multiconnection 00:24:19.358 ************************************ 00:24:19.358 21:37:40 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:24:19.616 * Looking for test storage... 
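With the fuzz run complete, the harness moves on to nvmf_multiconnection. The long trace below repeats the veth setup, starts nvmf_tgt on four cores (-m 0xF) inside the namespace, and then unrolls one loop eleven times: create a 64 MB malloc bdev (512-byte blocks), create subsystem nqn.2016-06.io.spdk:cnodeN, attach the bdev as a namespace, and add a TCP listener on 10.0.0.2:4420. A hedged sketch of that loop, assuming rpc_cmd is a thin wrapper around scripts/rpc.py talking to the target's /var/tmp/spdk.sock (the rpc.py spelling here is illustrative):

  rpc.py nvmf_create_transport -t tcp -o -u 8192            # once, before the loop
  for i in $(seq 1 11); do
      rpc.py bdev_malloc_create 64 512 -b "Malloc$i"
      rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
      rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
      rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
  done

The repeated rpc_cmd / xtrace blocks in the trace that follows are exactly this loop unrolled for Malloc1 through Malloc11.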
00:24:19.616 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:24:19.616 21:37:40 -- target/multiconnection.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:19.616 21:37:40 -- nvmf/common.sh@7 -- # uname -s 00:24:19.616 21:37:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:19.616 21:37:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:19.616 21:37:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:19.616 21:37:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:19.616 21:37:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:19.616 21:37:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:19.616 21:37:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:19.616 21:37:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:19.616 21:37:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:19.616 21:37:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:19.616 21:37:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:65f0dc09-2f81-4c7b-a413-2a2a000e2750 00:24:19.616 21:37:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=65f0dc09-2f81-4c7b-a413-2a2a000e2750 00:24:19.616 21:37:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:19.616 21:37:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:19.616 21:37:40 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:19.616 21:37:40 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:19.616 21:37:40 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:19.616 21:37:40 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:19.616 21:37:40 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:19.616 21:37:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.616 21:37:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.617 21:37:40 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.617 21:37:40 -- 
paths/export.sh@5 -- # export PATH 00:24:19.617 21:37:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.617 21:37:40 -- nvmf/common.sh@46 -- # : 0 00:24:19.617 21:37:40 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:19.617 21:37:40 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:19.617 21:37:40 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:19.617 21:37:40 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:19.617 21:37:40 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:19.617 21:37:40 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:19.617 21:37:40 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:19.617 21:37:40 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:19.617 21:37:40 -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:19.617 21:37:40 -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:19.617 21:37:40 -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:24:19.617 21:37:40 -- target/multiconnection.sh@16 -- # nvmftestinit 00:24:19.617 21:37:40 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:24:19.617 21:37:40 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:19.617 21:37:40 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:19.617 21:37:40 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:19.617 21:37:40 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:19.617 21:37:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:19.617 21:37:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:19.617 21:37:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:19.617 21:37:40 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:24:19.617 21:37:40 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:24:19.617 21:37:40 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:24:19.617 21:37:40 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:24:19.617 21:37:40 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:24:19.617 21:37:40 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:24:19.617 21:37:40 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:19.617 21:37:40 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:19.617 21:37:40 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:24:19.617 21:37:40 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:24:19.617 21:37:40 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:19.617 21:37:40 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:19.617 21:37:40 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:19.617 21:37:40 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:19.617 21:37:40 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:19.617 21:37:40 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:19.617 21:37:40 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:19.617 21:37:40 -- nvmf/common.sh@151 -- # 
NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:19.617 21:37:40 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:24:19.617 21:37:40 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:24:19.617 Cannot find device "nvmf_tgt_br" 00:24:19.617 21:37:40 -- nvmf/common.sh@154 -- # true 00:24:19.617 21:37:40 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:24:19.617 Cannot find device "nvmf_tgt_br2" 00:24:19.617 21:37:40 -- nvmf/common.sh@155 -- # true 00:24:19.617 21:37:40 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:24:19.617 21:37:40 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:24:19.617 Cannot find device "nvmf_tgt_br" 00:24:19.617 21:37:40 -- nvmf/common.sh@157 -- # true 00:24:19.617 21:37:40 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:24:19.617 Cannot find device "nvmf_tgt_br2" 00:24:19.617 21:37:40 -- nvmf/common.sh@158 -- # true 00:24:19.617 21:37:40 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:24:19.617 21:37:40 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:24:19.617 21:37:40 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:19.617 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:19.617 21:37:40 -- nvmf/common.sh@161 -- # true 00:24:19.617 21:37:40 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:19.617 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:19.617 21:37:40 -- nvmf/common.sh@162 -- # true 00:24:19.617 21:37:40 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:24:19.617 21:37:40 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:19.617 21:37:40 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:19.617 21:37:40 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:19.617 21:37:40 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:19.875 21:37:40 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:19.875 21:37:40 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:19.875 21:37:40 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:24:19.875 21:37:40 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:24:19.875 21:37:40 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:24:19.875 21:37:40 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:24:19.875 21:37:40 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:24:19.875 21:37:40 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:24:19.875 21:37:40 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:19.875 21:37:40 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:19.875 21:37:40 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:19.875 21:37:40 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:24:19.875 21:37:40 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:24:19.875 21:37:40 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:24:19.876 21:37:40 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:19.876 21:37:40 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:19.876 
21:37:40 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:19.876 21:37:40 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:19.876 21:37:40 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:24:19.876 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:19.876 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.109 ms 00:24:19.876 00:24:19.876 --- 10.0.0.2 ping statistics --- 00:24:19.876 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:19.876 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:24:19.876 21:37:40 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:24:19.876 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:19.876 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:24:19.876 00:24:19.876 --- 10.0.0.3 ping statistics --- 00:24:19.876 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:19.876 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:24:19.876 21:37:40 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:19.876 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:19.876 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:24:19.876 00:24:19.876 --- 10.0.0.1 ping statistics --- 00:24:19.876 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:19.876 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:24:19.876 21:37:40 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:19.876 21:37:40 -- nvmf/common.sh@421 -- # return 0 00:24:19.876 21:37:40 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:19.876 21:37:40 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:19.876 21:37:40 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:24:19.876 21:37:40 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:24:19.876 21:37:40 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:19.876 21:37:40 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:24:19.876 21:37:40 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:24:19.876 21:37:40 -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:24:19.876 21:37:40 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:19.876 21:37:40 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:19.876 21:37:40 -- common/autotest_common.sh@10 -- # set +x 00:24:19.876 21:37:40 -- nvmf/common.sh@469 -- # nvmfpid=78526 00:24:19.876 21:37:40 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:19.876 21:37:40 -- nvmf/common.sh@470 -- # waitforlisten 78526 00:24:19.876 21:37:40 -- common/autotest_common.sh@819 -- # '[' -z 78526 ']' 00:24:19.876 21:37:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:19.876 21:37:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:19.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:19.876 21:37:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:19.876 21:37:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:19.876 21:37:40 -- common/autotest_common.sh@10 -- # set +x 00:24:19.876 [2024-07-11 21:37:40.803083] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:24:19.876 [2024-07-11 21:37:40.803180] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:20.135 [2024-07-11 21:37:40.954938] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:20.135 [2024-07-11 21:37:41.050051] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:20.135 [2024-07-11 21:37:41.050205] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:20.135 [2024-07-11 21:37:41.050218] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:20.135 [2024-07-11 21:37:41.050227] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:20.135 [2024-07-11 21:37:41.050331] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:20.135 [2024-07-11 21:37:41.051448] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:20.135 [2024-07-11 21:37:41.051540] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:20.135 [2024-07-11 21:37:41.051549] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:21.069 21:37:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:21.069 21:37:41 -- common/autotest_common.sh@852 -- # return 0 00:24:21.069 21:37:41 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:21.069 21:37:41 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:21.069 21:37:41 -- common/autotest_common.sh@10 -- # set +x 00:24:21.069 21:37:41 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:21.069 21:37:41 -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:21.069 21:37:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:21.069 21:37:41 -- common/autotest_common.sh@10 -- # set +x 00:24:21.069 [2024-07-11 21:37:41.891421] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:21.069 21:37:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:21.069 21:37:41 -- target/multiconnection.sh@21 -- # seq 1 11 00:24:21.069 21:37:41 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:21.069 21:37:41 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:21.069 21:37:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:21.069 21:37:41 -- common/autotest_common.sh@10 -- # set +x 00:24:21.069 Malloc1 00:24:21.069 21:37:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:21.069 21:37:41 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:24:21.069 21:37:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:21.069 21:37:41 -- common/autotest_common.sh@10 -- # set +x 00:24:21.069 21:37:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:21.069 21:37:41 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:21.069 21:37:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:21.069 21:37:41 -- common/autotest_common.sh@10 -- # set +x 00:24:21.069 21:37:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:21.069 21:37:41 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:21.070 21:37:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:21.070 21:37:41 -- common/autotest_common.sh@10 -- # set +x 00:24:21.070 [2024-07-11 21:37:41.964717] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:21.070 21:37:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:21.070 21:37:41 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:21.070 21:37:41 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:24:21.070 21:37:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:21.070 21:37:41 -- common/autotest_common.sh@10 -- # set +x 00:24:21.070 Malloc2 00:24:21.070 21:37:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:21.070 21:37:41 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:24:21.070 21:37:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:21.070 21:37:41 -- common/autotest_common.sh@10 -- # set +x 00:24:21.070 21:37:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:21.070 21:37:42 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:24:21.070 21:37:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:21.070 21:37:42 -- common/autotest_common.sh@10 -- # set +x 00:24:21.070 21:37:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:21.070 21:37:42 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:24:21.070 21:37:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:21.070 21:37:42 -- common/autotest_common.sh@10 -- # set +x 00:24:21.328 21:37:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:21.328 21:37:42 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:21.328 21:37:42 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:24:21.328 21:37:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:21.328 21:37:42 -- common/autotest_common.sh@10 -- # set +x 00:24:21.328 Malloc3 00:24:21.328 21:37:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:21.328 21:37:42 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:24:21.328 21:37:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:21.328 21:37:42 -- common/autotest_common.sh@10 -- # set +x 00:24:21.328 21:37:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:21.328 21:37:42 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:24:21.328 21:37:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:21.328 21:37:42 -- common/autotest_common.sh@10 -- # set +x 00:24:21.328 21:37:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:21.328 21:37:42 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:24:21.328 21:37:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:21.328 21:37:42 -- common/autotest_common.sh@10 -- # set +x 00:24:21.328 21:37:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:21.328 21:37:42 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:21.328 21:37:42 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:24:21.328 
21:37:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:21.328 21:37:42 -- common/autotest_common.sh@10 -- # set +x 00:24:21.328 Malloc4 00:24:21.328 21:37:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:21.328 21:37:42 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:24:21.328 21:37:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:21.328 21:37:42 -- common/autotest_common.sh@10 -- # set +x 00:24:21.328 21:37:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:21.328 21:37:42 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:24:21.328 21:37:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:21.328 21:37:42 -- common/autotest_common.sh@10 -- # set +x 00:24:21.328 21:37:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:21.328 21:37:42 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:24:21.328 21:37:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:21.328 21:37:42 -- common/autotest_common.sh@10 -- # set +x 00:24:21.328 21:37:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:21.328 21:37:42 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:21.328 21:37:42 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:24:21.328 21:37:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:21.328 21:37:42 -- common/autotest_common.sh@10 -- # set +x 00:24:21.328 Malloc5 00:24:21.328 21:37:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:21.328 21:37:42 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:24:21.328 21:37:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:21.328 21:37:42 -- common/autotest_common.sh@10 -- # set +x 00:24:21.328 21:37:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:21.328 21:37:42 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:24:21.328 21:37:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:21.328 21:37:42 -- common/autotest_common.sh@10 -- # set +x 00:24:21.328 21:37:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:21.328 21:37:42 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:24:21.328 21:37:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:21.328 21:37:42 -- common/autotest_common.sh@10 -- # set +x 00:24:21.328 21:37:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:21.328 21:37:42 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:21.328 21:37:42 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:24:21.329 21:37:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:21.329 21:37:42 -- common/autotest_common.sh@10 -- # set +x 00:24:21.329 Malloc6 00:24:21.329 21:37:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:21.329 21:37:42 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:24:21.329 21:37:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:21.329 21:37:42 -- common/autotest_common.sh@10 -- # set +x 00:24:21.329 21:37:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:21.329 21:37:42 -- 
target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:24:21.329 21:37:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:21.329 21:37:42 -- common/autotest_common.sh@10 -- # set +x 00:24:21.329 21:37:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:21.329 21:37:42 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:24:21.329 21:37:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:21.329 21:37:42 -- common/autotest_common.sh@10 -- # set +x 00:24:21.329 21:37:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:21.329 21:37:42 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:21.329 21:37:42 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:24:21.329 21:37:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:21.329 21:37:42 -- common/autotest_common.sh@10 -- # set +x 00:24:21.329 Malloc7 00:24:21.329 21:37:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:21.329 21:37:42 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:24:21.329 21:37:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:21.329 21:37:42 -- common/autotest_common.sh@10 -- # set +x 00:24:21.329 21:37:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:21.329 21:37:42 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:24:21.329 21:37:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:21.329 21:37:42 -- common/autotest_common.sh@10 -- # set +x 00:24:21.329 21:37:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:21.329 21:37:42 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:24:21.329 21:37:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:21.329 21:37:42 -- common/autotest_common.sh@10 -- # set +x 00:24:21.329 21:37:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:21.329 21:37:42 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:21.329 21:37:42 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:24:21.329 21:37:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:21.329 21:37:42 -- common/autotest_common.sh@10 -- # set +x 00:24:21.587 Malloc8 00:24:21.587 21:37:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:21.587 21:37:42 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:24:21.587 21:37:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:21.587 21:37:42 -- common/autotest_common.sh@10 -- # set +x 00:24:21.587 21:37:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:21.587 21:37:42 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:24:21.587 21:37:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:21.587 21:37:42 -- common/autotest_common.sh@10 -- # set +x 00:24:21.587 21:37:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:21.587 21:37:42 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:24:21.587 21:37:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:21.587 21:37:42 -- common/autotest_common.sh@10 -- # set +x 
00:24:21.587 21:37:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:21.587 21:37:42 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:21.587 21:37:42 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:24:21.587 21:37:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:21.587 21:37:42 -- common/autotest_common.sh@10 -- # set +x 00:24:21.587 Malloc9 00:24:21.587 21:37:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:21.587 21:37:42 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:24:21.587 21:37:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:21.587 21:37:42 -- common/autotest_common.sh@10 -- # set +x 00:24:21.587 21:37:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:21.587 21:37:42 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:24:21.587 21:37:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:21.587 21:37:42 -- common/autotest_common.sh@10 -- # set +x 00:24:21.587 21:37:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:21.587 21:37:42 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:24:21.587 21:37:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:21.587 21:37:42 -- common/autotest_common.sh@10 -- # set +x 00:24:21.587 21:37:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:21.587 21:37:42 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:21.587 21:37:42 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:24:21.587 21:37:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:21.587 21:37:42 -- common/autotest_common.sh@10 -- # set +x 00:24:21.587 Malloc10 00:24:21.587 21:37:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:21.587 21:37:42 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:24:21.587 21:37:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:21.587 21:37:42 -- common/autotest_common.sh@10 -- # set +x 00:24:21.588 21:37:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:21.588 21:37:42 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:24:21.588 21:37:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:21.588 21:37:42 -- common/autotest_common.sh@10 -- # set +x 00:24:21.588 21:37:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:21.588 21:37:42 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:24:21.588 21:37:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:21.588 21:37:42 -- common/autotest_common.sh@10 -- # set +x 00:24:21.588 21:37:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:21.588 21:37:42 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:21.588 21:37:42 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:24:21.588 21:37:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:21.588 21:37:42 -- common/autotest_common.sh@10 -- # set +x 00:24:21.588 Malloc11 00:24:21.588 21:37:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:21.588 21:37:42 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:24:21.588 21:37:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:21.588 21:37:42 -- common/autotest_common.sh@10 -- # set +x 00:24:21.588 21:37:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:21.588 21:37:42 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:24:21.588 21:37:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:21.588 21:37:42 -- common/autotest_common.sh@10 -- # set +x 00:24:21.588 21:37:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:21.588 21:37:42 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:24:21.588 21:37:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:21.588 21:37:42 -- common/autotest_common.sh@10 -- # set +x 00:24:21.588 21:37:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:21.588 21:37:42 -- target/multiconnection.sh@28 -- # seq 1 11 00:24:21.588 21:37:42 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:21.588 21:37:42 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:65f0dc09-2f81-4c7b-a413-2a2a000e2750 --hostid=65f0dc09-2f81-4c7b-a413-2a2a000e2750 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:24:21.846 21:37:42 -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:24:21.846 21:37:42 -- common/autotest_common.sh@1177 -- # local i=0 00:24:21.846 21:37:42 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:24:21.846 21:37:42 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:24:21.846 21:37:42 -- common/autotest_common.sh@1184 -- # sleep 2 00:24:23.743 21:37:44 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:24:23.743 21:37:44 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:24:23.743 21:37:44 -- common/autotest_common.sh@1186 -- # grep -c SPDK1 00:24:23.743 21:37:44 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:24:23.743 21:37:44 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:24:23.743 21:37:44 -- common/autotest_common.sh@1187 -- # return 0 00:24:23.743 21:37:44 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:23.743 21:37:44 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:65f0dc09-2f81-4c7b-a413-2a2a000e2750 --hostid=65f0dc09-2f81-4c7b-a413-2a2a000e2750 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:24:24.001 21:37:44 -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:24:24.001 21:37:44 -- common/autotest_common.sh@1177 -- # local i=0 00:24:24.001 21:37:44 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:24:24.001 21:37:44 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:24:24.001 21:37:44 -- common/autotest_common.sh@1184 -- # sleep 2 00:24:25.903 21:37:46 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:24:25.903 21:37:46 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:24:25.903 21:37:46 -- common/autotest_common.sh@1186 -- # grep -c SPDK2 00:24:25.903 21:37:46 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:24:25.903 21:37:46 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:24:25.903 21:37:46 -- common/autotest_common.sh@1187 -- # return 0 00:24:25.903 21:37:46 -- target/multiconnection.sh@28 -- # for i in $(seq 1 
$NVMF_SUBSYS) 00:24:25.903 21:37:46 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:65f0dc09-2f81-4c7b-a413-2a2a000e2750 --hostid=65f0dc09-2f81-4c7b-a413-2a2a000e2750 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:24:26.161 21:37:46 -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:24:26.161 21:37:46 -- common/autotest_common.sh@1177 -- # local i=0 00:24:26.161 21:37:46 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:24:26.161 21:37:46 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:24:26.161 21:37:46 -- common/autotest_common.sh@1184 -- # sleep 2 00:24:28.082 21:37:48 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:24:28.082 21:37:48 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:24:28.082 21:37:48 -- common/autotest_common.sh@1186 -- # grep -c SPDK3 00:24:28.082 21:37:48 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:24:28.082 21:37:48 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:24:28.082 21:37:48 -- common/autotest_common.sh@1187 -- # return 0 00:24:28.082 21:37:48 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:28.082 21:37:48 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:65f0dc09-2f81-4c7b-a413-2a2a000e2750 --hostid=65f0dc09-2f81-4c7b-a413-2a2a000e2750 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:24:28.340 21:37:49 -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:24:28.340 21:37:49 -- common/autotest_common.sh@1177 -- # local i=0 00:24:28.340 21:37:49 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:24:28.340 21:37:49 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:24:28.340 21:37:49 -- common/autotest_common.sh@1184 -- # sleep 2 00:24:30.238 21:37:51 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:24:30.238 21:37:51 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:24:30.238 21:37:51 -- common/autotest_common.sh@1186 -- # grep -c SPDK4 00:24:30.238 21:37:51 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:24:30.238 21:37:51 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:24:30.238 21:37:51 -- common/autotest_common.sh@1187 -- # return 0 00:24:30.238 21:37:51 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:30.238 21:37:51 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:65f0dc09-2f81-4c7b-a413-2a2a000e2750 --hostid=65f0dc09-2f81-4c7b-a413-2a2a000e2750 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:24:30.495 21:37:51 -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:24:30.495 21:37:51 -- common/autotest_common.sh@1177 -- # local i=0 00:24:30.495 21:37:51 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:24:30.495 21:37:51 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:24:30.495 21:37:51 -- common/autotest_common.sh@1184 -- # sleep 2 00:24:32.419 21:37:53 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:24:32.419 21:37:53 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:24:32.419 21:37:53 -- common/autotest_common.sh@1186 -- # grep -c SPDK5 00:24:32.419 21:37:53 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:24:32.419 21:37:53 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:24:32.419 21:37:53 
-- common/autotest_common.sh@1187 -- # return 0 00:24:32.419 21:37:53 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:32.419 21:37:53 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:65f0dc09-2f81-4c7b-a413-2a2a000e2750 --hostid=65f0dc09-2f81-4c7b-a413-2a2a000e2750 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:24:32.419 21:37:53 -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:24:32.678 21:37:53 -- common/autotest_common.sh@1177 -- # local i=0 00:24:32.678 21:37:53 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:24:32.678 21:37:53 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:24:32.678 21:37:53 -- common/autotest_common.sh@1184 -- # sleep 2 00:24:34.575 21:37:55 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:24:34.575 21:37:55 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:24:34.575 21:37:55 -- common/autotest_common.sh@1186 -- # grep -c SPDK6 00:24:34.575 21:37:55 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:24:34.575 21:37:55 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:24:34.575 21:37:55 -- common/autotest_common.sh@1187 -- # return 0 00:24:34.575 21:37:55 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:34.575 21:37:55 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:65f0dc09-2f81-4c7b-a413-2a2a000e2750 --hostid=65f0dc09-2f81-4c7b-a413-2a2a000e2750 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:24:34.833 21:37:55 -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:24:34.833 21:37:55 -- common/autotest_common.sh@1177 -- # local i=0 00:24:34.833 21:37:55 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:24:34.833 21:37:55 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:24:34.833 21:37:55 -- common/autotest_common.sh@1184 -- # sleep 2 00:24:36.730 21:37:57 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:24:36.730 21:37:57 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:24:36.730 21:37:57 -- common/autotest_common.sh@1186 -- # grep -c SPDK7 00:24:36.730 21:37:57 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:24:36.730 21:37:57 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:24:36.730 21:37:57 -- common/autotest_common.sh@1187 -- # return 0 00:24:36.730 21:37:57 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:36.730 21:37:57 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:65f0dc09-2f81-4c7b-a413-2a2a000e2750 --hostid=65f0dc09-2f81-4c7b-a413-2a2a000e2750 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:24:36.988 21:37:57 -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:24:36.988 21:37:57 -- common/autotest_common.sh@1177 -- # local i=0 00:24:36.988 21:37:57 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:24:36.988 21:37:57 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:24:36.988 21:37:57 -- common/autotest_common.sh@1184 -- # sleep 2 00:24:38.887 21:37:59 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:24:38.887 21:37:59 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:24:38.887 21:37:59 -- common/autotest_common.sh@1186 -- # grep -c SPDK8 00:24:38.887 21:37:59 -- common/autotest_common.sh@1186 -- # nvme_devices=1 
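The blocks around this point are the initiator side of the test: for each of the eleven subsystems the script runs nvme connect against 10.0.0.2:4420 and then polls lsblk until a block device with the matching serial (SPDK1 through SPDK11) appears. A condensed sketch of that connect-and-wait pattern, with the hostnqn/hostid taken from the trace; the retry limit of roughly 15 polls mirrors waitforserial, though exact details may differ:

  for i in $(seq 1 11); do
      nvme connect -t tcp -n "nqn.2016-06.io.spdk:cnode$i" -a 10.0.0.2 -s 4420 \
          --hostnqn=nqn.2014-08.org.nvmexpress:uuid:65f0dc09-2f81-4c7b-a413-2a2a000e2750 \
          --hostid=65f0dc09-2f81-4c7b-a413-2a2a000e2750
      n=0
      until [ "$(lsblk -l -o NAME,SERIAL | grep -c "SPDK$i")" -ge 1 ] || [ "$n" -gt 15 ]; do
          sleep 2; n=$((n + 1))        # wait for the namespace to show up
      done
  done

Once all eleven namespaces are visible, fio-wrapper launches the 10-second libaio read job (bs=262144, iodepth=64, one job per /dev/nvme*n1 device) whose job file and per-job results are printed further down.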
00:24:38.887 21:37:59 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:24:38.887 21:37:59 -- common/autotest_common.sh@1187 -- # return 0 00:24:38.887 21:37:59 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:38.887 21:37:59 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:65f0dc09-2f81-4c7b-a413-2a2a000e2750 --hostid=65f0dc09-2f81-4c7b-a413-2a2a000e2750 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:24:39.145 21:37:59 -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:24:39.145 21:37:59 -- common/autotest_common.sh@1177 -- # local i=0 00:24:39.145 21:37:59 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:24:39.145 21:37:59 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:24:39.145 21:37:59 -- common/autotest_common.sh@1184 -- # sleep 2 00:24:41.044 21:38:01 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:24:41.044 21:38:01 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:24:41.044 21:38:01 -- common/autotest_common.sh@1186 -- # grep -c SPDK9 00:24:41.044 21:38:01 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:24:41.044 21:38:01 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:24:41.044 21:38:01 -- common/autotest_common.sh@1187 -- # return 0 00:24:41.044 21:38:01 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:41.044 21:38:01 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:65f0dc09-2f81-4c7b-a413-2a2a000e2750 --hostid=65f0dc09-2f81-4c7b-a413-2a2a000e2750 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:24:41.301 21:38:02 -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:24:41.301 21:38:02 -- common/autotest_common.sh@1177 -- # local i=0 00:24:41.301 21:38:02 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:24:41.301 21:38:02 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:24:41.301 21:38:02 -- common/autotest_common.sh@1184 -- # sleep 2 00:24:43.198 21:38:04 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:24:43.198 21:38:04 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:24:43.198 21:38:04 -- common/autotest_common.sh@1186 -- # grep -c SPDK10 00:24:43.198 21:38:04 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:24:43.198 21:38:04 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:24:43.198 21:38:04 -- common/autotest_common.sh@1187 -- # return 0 00:24:43.198 21:38:04 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:43.198 21:38:04 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:65f0dc09-2f81-4c7b-a413-2a2a000e2750 --hostid=65f0dc09-2f81-4c7b-a413-2a2a000e2750 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:24:43.456 21:38:04 -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:24:43.456 21:38:04 -- common/autotest_common.sh@1177 -- # local i=0 00:24:43.456 21:38:04 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:24:43.456 21:38:04 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:24:43.456 21:38:04 -- common/autotest_common.sh@1184 -- # sleep 2 00:24:45.379 21:38:06 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:24:45.379 21:38:06 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:24:45.379 21:38:06 
-- common/autotest_common.sh@1186 -- # grep -c SPDK11 00:24:45.379 21:38:06 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:24:45.379 21:38:06 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:24:45.379 21:38:06 -- common/autotest_common.sh@1187 -- # return 0 00:24:45.379 21:38:06 -- target/multiconnection.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:24:45.379 [global] 00:24:45.379 thread=1 00:24:45.379 invalidate=1 00:24:45.379 rw=read 00:24:45.379 time_based=1 00:24:45.379 runtime=10 00:24:45.379 ioengine=libaio 00:24:45.379 direct=1 00:24:45.379 bs=262144 00:24:45.379 iodepth=64 00:24:45.379 norandommap=1 00:24:45.379 numjobs=1 00:24:45.379 00:24:45.379 [job0] 00:24:45.379 filename=/dev/nvme0n1 00:24:45.379 [job1] 00:24:45.379 filename=/dev/nvme10n1 00:24:45.379 [job2] 00:24:45.379 filename=/dev/nvme1n1 00:24:45.379 [job3] 00:24:45.379 filename=/dev/nvme2n1 00:24:45.379 [job4] 00:24:45.379 filename=/dev/nvme3n1 00:24:45.379 [job5] 00:24:45.379 filename=/dev/nvme4n1 00:24:45.379 [job6] 00:24:45.379 filename=/dev/nvme5n1 00:24:45.379 [job7] 00:24:45.379 filename=/dev/nvme6n1 00:24:45.379 [job8] 00:24:45.379 filename=/dev/nvme7n1 00:24:45.379 [job9] 00:24:45.379 filename=/dev/nvme8n1 00:24:45.379 [job10] 00:24:45.379 filename=/dev/nvme9n1 00:24:45.658 Could not set queue depth (nvme0n1) 00:24:45.658 Could not set queue depth (nvme10n1) 00:24:45.658 Could not set queue depth (nvme1n1) 00:24:45.658 Could not set queue depth (nvme2n1) 00:24:45.658 Could not set queue depth (nvme3n1) 00:24:45.658 Could not set queue depth (nvme4n1) 00:24:45.658 Could not set queue depth (nvme5n1) 00:24:45.658 Could not set queue depth (nvme6n1) 00:24:45.658 Could not set queue depth (nvme7n1) 00:24:45.658 Could not set queue depth (nvme8n1) 00:24:45.658 Could not set queue depth (nvme9n1) 00:24:45.658 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:45.658 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:45.658 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:45.658 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:45.658 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:45.658 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:45.658 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:45.658 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:45.658 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:45.658 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:45.658 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:45.658 fio-3.35 00:24:45.658 Starting 11 threads 00:24:57.852 00:24:57.852 job0: (groupid=0, jobs=1): err= 0: pid=78984: Thu Jul 11 21:38:16 2024 00:24:57.852 read: IOPS=496, BW=124MiB/s (130MB/s)(1251MiB/10086msec) 00:24:57.852 slat (usec): min=19, max=87346, avg=1996.00, stdev=4781.44 
00:24:57.852 clat (msec): min=24, max=208, avg=126.82, stdev=14.23 00:24:57.852 lat (msec): min=24, max=209, avg=128.82, stdev=14.37 00:24:57.852 clat percentiles (msec): 00:24:57.852 | 1.00th=[ 78], 5.00th=[ 105], 10.00th=[ 111], 20.00th=[ 117], 00:24:57.852 | 30.00th=[ 122], 40.00th=[ 125], 50.00th=[ 129], 60.00th=[ 131], 00:24:57.852 | 70.00th=[ 134], 80.00th=[ 138], 90.00th=[ 142], 95.00th=[ 146], 00:24:57.852 | 99.00th=[ 161], 99.50th=[ 165], 99.90th=[ 197], 99.95th=[ 197], 00:24:57.852 | 99.99th=[ 209] 00:24:57.852 bw ( KiB/s): min=116736, max=151040, per=7.71%, avg=126501.75, stdev=8394.50, samples=20 00:24:57.852 iops : min= 456, max= 590, avg=493.95, stdev=32.87, samples=20 00:24:57.852 lat (msec) : 50=0.02%, 100=3.10%, 250=96.88% 00:24:57.852 cpu : usr=0.35%, sys=2.05%, ctx=1169, majf=0, minf=4097 00:24:57.852 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:24:57.852 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:57.852 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:57.852 issued rwts: total=5003,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:57.852 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:57.852 job1: (groupid=0, jobs=1): err= 0: pid=78985: Thu Jul 11 21:38:16 2024 00:24:57.852 read: IOPS=338, BW=84.7MiB/s (88.9MB/s)(857MiB/10107msec) 00:24:57.852 slat (usec): min=19, max=61726, avg=2877.68, stdev=6715.75 00:24:57.853 clat (msec): min=38, max=245, avg=185.56, stdev=21.79 00:24:57.853 lat (msec): min=40, max=245, avg=188.44, stdev=22.41 00:24:57.853 clat percentiles (msec): 00:24:57.853 | 1.00th=[ 127], 5.00th=[ 142], 10.00th=[ 153], 20.00th=[ 171], 00:24:57.853 | 30.00th=[ 184], 40.00th=[ 188], 50.00th=[ 190], 60.00th=[ 194], 00:24:57.853 | 70.00th=[ 197], 80.00th=[ 201], 90.00th=[ 209], 95.00th=[ 215], 00:24:57.853 | 99.00th=[ 230], 99.50th=[ 232], 99.90th=[ 239], 99.95th=[ 239], 00:24:57.853 | 99.99th=[ 247] 00:24:57.853 bw ( KiB/s): min=76288, max=112128, per=5.25%, avg=86049.95, stdev=7978.61, samples=20 00:24:57.853 iops : min= 298, max= 438, avg=336.05, stdev=31.18, samples=20 00:24:57.853 lat (msec) : 50=0.03%, 100=0.03%, 250=99.94% 00:24:57.853 cpu : usr=0.14%, sys=1.43%, ctx=824, majf=0, minf=4097 00:24:57.853 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:24:57.853 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:57.853 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:57.853 issued rwts: total=3426,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:57.853 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:57.853 job2: (groupid=0, jobs=1): err= 0: pid=78986: Thu Jul 11 21:38:16 2024 00:24:57.853 read: IOPS=493, BW=123MiB/s (129MB/s)(1244MiB/10086msec) 00:24:57.853 slat (usec): min=17, max=94892, avg=2009.50, stdev=4916.01 00:24:57.853 clat (msec): min=48, max=188, avg=127.50, stdev=14.16 00:24:57.853 lat (msec): min=52, max=224, avg=129.51, stdev=14.21 00:24:57.853 clat percentiles (msec): 00:24:57.853 | 1.00th=[ 97], 5.00th=[ 107], 10.00th=[ 111], 20.00th=[ 117], 00:24:57.853 | 30.00th=[ 122], 40.00th=[ 125], 50.00th=[ 128], 60.00th=[ 131], 00:24:57.853 | 70.00th=[ 134], 80.00th=[ 138], 90.00th=[ 144], 95.00th=[ 150], 00:24:57.853 | 99.00th=[ 163], 99.50th=[ 176], 99.90th=[ 188], 99.95th=[ 188], 00:24:57.853 | 99.99th=[ 188] 00:24:57.853 bw ( KiB/s): min=104657, max=147456, per=7.67%, avg=125692.15, stdev=9949.03, samples=20 00:24:57.853 iops : min= 408, max= 576, 
avg=490.85, stdev=38.83, samples=20 00:24:57.853 lat (msec) : 50=0.02%, 100=1.75%, 250=98.23% 00:24:57.853 cpu : usr=0.20%, sys=2.12%, ctx=1060, majf=0, minf=4097 00:24:57.853 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:24:57.853 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:57.853 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:57.853 issued rwts: total=4976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:57.853 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:57.853 job3: (groupid=0, jobs=1): err= 0: pid=78987: Thu Jul 11 21:38:16 2024 00:24:57.853 read: IOPS=343, BW=85.8MiB/s (90.0MB/s)(868MiB/10108msec) 00:24:57.853 slat (usec): min=20, max=87081, avg=2875.77, stdev=6848.74 00:24:57.853 clat (msec): min=19, max=249, avg=183.22, stdev=24.39 00:24:57.853 lat (msec): min=19, max=249, avg=186.09, stdev=25.02 00:24:57.853 clat percentiles (msec): 00:24:57.853 | 1.00th=[ 94], 5.00th=[ 138], 10.00th=[ 150], 20.00th=[ 169], 00:24:57.853 | 30.00th=[ 182], 40.00th=[ 186], 50.00th=[ 190], 60.00th=[ 192], 00:24:57.853 | 70.00th=[ 197], 80.00th=[ 201], 90.00th=[ 205], 95.00th=[ 211], 00:24:57.853 | 99.00th=[ 228], 99.50th=[ 234], 99.90th=[ 239], 99.95th=[ 241], 00:24:57.853 | 99.99th=[ 251] 00:24:57.853 bw ( KiB/s): min=76288, max=114404, per=5.32%, avg=87247.15, stdev=9219.38, samples=20 00:24:57.853 iops : min= 298, max= 446, avg=340.70, stdev=35.82, samples=20 00:24:57.853 lat (msec) : 20=0.09%, 100=1.82%, 250=98.10% 00:24:57.853 cpu : usr=0.21%, sys=1.77%, ctx=823, majf=0, minf=4097 00:24:57.853 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:24:57.853 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:57.853 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:57.853 issued rwts: total=3471,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:57.853 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:57.853 job4: (groupid=0, jobs=1): err= 0: pid=78988: Thu Jul 11 21:38:16 2024 00:24:57.853 read: IOPS=495, BW=124MiB/s (130MB/s)(1249MiB/10089msec) 00:24:57.853 slat (usec): min=18, max=45503, avg=2002.21, stdev=4704.16 00:24:57.853 clat (msec): min=21, max=187, avg=127.03, stdev=12.59 00:24:57.853 lat (msec): min=22, max=187, avg=129.03, stdev=12.67 00:24:57.853 clat percentiles (msec): 00:24:57.853 | 1.00th=[ 95], 5.00th=[ 107], 10.00th=[ 112], 20.00th=[ 118], 00:24:57.853 | 30.00th=[ 123], 40.00th=[ 125], 50.00th=[ 128], 60.00th=[ 131], 00:24:57.853 | 70.00th=[ 133], 80.00th=[ 138], 90.00th=[ 142], 95.00th=[ 146], 00:24:57.853 | 99.00th=[ 155], 99.50th=[ 161], 99.90th=[ 184], 99.95th=[ 184], 00:24:57.853 | 99.99th=[ 188] 00:24:57.853 bw ( KiB/s): min=105261, max=147456, per=7.70%, avg=126300.75, stdev=9150.90, samples=20 00:24:57.853 iops : min= 411, max= 576, avg=493.20, stdev=35.77, samples=20 00:24:57.853 lat (msec) : 50=0.14%, 100=2.20%, 250=97.66% 00:24:57.853 cpu : usr=0.26%, sys=2.08%, ctx=1130, majf=0, minf=4097 00:24:57.853 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:24:57.853 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:57.853 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:57.853 issued rwts: total=4996,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:57.853 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:57.853 job5: (groupid=0, jobs=1): err= 0: pid=78989: Thu Jul 11 21:38:16 2024 
00:24:57.853 read: IOPS=493, BW=123MiB/s (129MB/s)(1245MiB/10092msec) 00:24:57.853 slat (usec): min=18, max=38043, avg=1978.88, stdev=4569.54 00:24:57.853 clat (msec): min=17, max=200, avg=127.54, stdev=15.36 00:24:57.853 lat (msec): min=17, max=200, avg=129.52, stdev=15.40 00:24:57.853 clat percentiles (msec): 00:24:57.853 | 1.00th=[ 91], 5.00th=[ 106], 10.00th=[ 110], 20.00th=[ 116], 00:24:57.853 | 30.00th=[ 122], 40.00th=[ 125], 50.00th=[ 129], 60.00th=[ 132], 00:24:57.853 | 70.00th=[ 136], 80.00th=[ 140], 90.00th=[ 144], 95.00th=[ 150], 00:24:57.853 | 99.00th=[ 161], 99.50th=[ 165], 99.90th=[ 188], 99.95th=[ 201], 00:24:57.853 | 99.99th=[ 201] 00:24:57.853 bw ( KiB/s): min=112640, max=148480, per=7.67%, avg=125813.85, stdev=8829.50, samples=20 00:24:57.853 iops : min= 440, max= 580, avg=491.45, stdev=34.48, samples=20 00:24:57.853 lat (msec) : 20=0.04%, 50=0.22%, 100=2.27%, 250=97.47% 00:24:57.853 cpu : usr=0.29%, sys=2.25%, ctx=1075, majf=0, minf=4097 00:24:57.853 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:24:57.853 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:57.853 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:57.853 issued rwts: total=4978,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:57.853 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:57.853 job6: (groupid=0, jobs=1): err= 0: pid=78990: Thu Jul 11 21:38:16 2024 00:24:57.853 read: IOPS=1319, BW=330MiB/s (346MB/s)(3303MiB/10011msec) 00:24:57.853 slat (usec): min=18, max=27374, avg=752.60, stdev=1856.45 00:24:57.853 clat (usec): min=8850, max=94750, avg=47640.68, stdev=13944.46 00:24:57.853 lat (usec): min=13308, max=94787, avg=48393.28, stdev=14131.20 00:24:57.853 clat percentiles (usec): 00:24:57.853 | 1.00th=[30016], 5.00th=[31851], 10.00th=[32900], 20.00th=[33817], 00:24:57.853 | 30.00th=[34866], 40.00th=[35914], 50.00th=[49021], 60.00th=[56886], 00:24:57.853 | 70.00th=[59507], 80.00th=[61604], 90.00th=[64226], 95.00th=[66847], 00:24:57.853 | 99.00th=[74974], 99.50th=[80217], 99.90th=[88605], 99.95th=[90702], 00:24:57.853 | 99.99th=[94897] 00:24:57.853 bw ( KiB/s): min=205824, max=478720, per=20.52%, avg=336513.70, stdev=100746.55, samples=20 00:24:57.853 iops : min= 804, max= 1870, avg=1314.45, stdev=393.54, samples=20 00:24:57.853 lat (msec) : 10=0.01%, 20=0.10%, 50=50.20%, 100=49.70% 00:24:57.853 cpu : usr=0.65%, sys=4.89%, ctx=2505, majf=0, minf=4097 00:24:57.853 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:24:57.853 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:57.853 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:57.854 issued rwts: total=13210,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:57.854 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:57.854 job7: (groupid=0, jobs=1): err= 0: pid=78991: Thu Jul 11 21:38:16 2024 00:24:57.854 read: IOPS=344, BW=86.1MiB/s (90.3MB/s)(871MiB/10110msec) 00:24:57.854 slat (usec): min=18, max=61797, avg=2866.56, stdev=6642.67 00:24:57.854 clat (msec): min=49, max=253, avg=182.50, stdev=26.38 00:24:57.854 lat (msec): min=49, max=253, avg=185.37, stdev=27.04 00:24:57.854 clat percentiles (msec): 00:24:57.854 | 1.00th=[ 71], 5.00th=[ 134], 10.00th=[ 146], 20.00th=[ 165], 00:24:57.854 | 30.00th=[ 182], 40.00th=[ 186], 50.00th=[ 190], 60.00th=[ 192], 00:24:57.854 | 70.00th=[ 197], 80.00th=[ 201], 90.00th=[ 205], 95.00th=[ 211], 00:24:57.854 | 99.00th=[ 220], 99.50th=[ 228], 
99.90th=[ 241], 99.95th=[ 253], 00:24:57.854 | 99.99th=[ 253] 00:24:57.854 bw ( KiB/s): min=78336, max=116456, per=5.34%, avg=87555.60, stdev=9864.51, samples=20 00:24:57.854 iops : min= 306, max= 454, avg=341.90, stdev=38.41, samples=20 00:24:57.854 lat (msec) : 50=0.09%, 100=1.67%, 250=98.19%, 500=0.06% 00:24:57.854 cpu : usr=0.17%, sys=1.55%, ctx=896, majf=0, minf=4097 00:24:57.854 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:24:57.854 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:57.854 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:57.854 issued rwts: total=3483,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:57.854 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:57.854 job8: (groupid=0, jobs=1): err= 0: pid=78992: Thu Jul 11 21:38:16 2024 00:24:57.854 read: IOPS=343, BW=85.8MiB/s (90.0MB/s)(868MiB/10114msec) 00:24:57.854 slat (usec): min=20, max=65927, avg=2880.91, stdev=6717.99 00:24:57.854 clat (msec): min=31, max=242, avg=183.18, stdev=23.75 00:24:57.854 lat (msec): min=31, max=255, avg=186.06, stdev=24.41 00:24:57.854 clat percentiles (msec): 00:24:57.854 | 1.00th=[ 123], 5.00th=[ 136], 10.00th=[ 146], 20.00th=[ 165], 00:24:57.854 | 30.00th=[ 182], 40.00th=[ 186], 50.00th=[ 190], 60.00th=[ 192], 00:24:57.854 | 70.00th=[ 197], 80.00th=[ 201], 90.00th=[ 205], 95.00th=[ 211], 00:24:57.854 | 99.00th=[ 224], 99.50th=[ 228], 99.90th=[ 243], 99.95th=[ 243], 00:24:57.854 | 99.99th=[ 243] 00:24:57.854 bw ( KiB/s): min=77824, max=119535, per=5.32%, avg=87248.85, stdev=9702.41, samples=20 00:24:57.854 iops : min= 304, max= 466, avg=340.70, stdev=37.67, samples=20 00:24:57.854 lat (msec) : 50=0.09%, 100=0.29%, 250=99.63% 00:24:57.854 cpu : usr=0.15%, sys=1.53%, ctx=830, majf=0, minf=4097 00:24:57.854 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:24:57.854 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:57.854 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:57.854 issued rwts: total=3472,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:57.854 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:57.854 job9: (groupid=0, jobs=1): err= 0: pid=78994: Thu Jul 11 21:38:16 2024 00:24:57.854 read: IOPS=1384, BW=346MiB/s (363MB/s)(3466MiB/10016msec) 00:24:57.854 slat (usec): min=18, max=32641, avg=717.31, stdev=1800.20 00:24:57.854 clat (usec): min=10622, max=79928, avg=45459.17, stdev=12966.45 00:24:57.854 lat (usec): min=13708, max=79976, avg=46176.48, stdev=13131.83 00:24:57.854 clat percentiles (usec): 00:24:57.854 | 1.00th=[28705], 5.00th=[31851], 10.00th=[32900], 20.00th=[34341], 00:24:57.854 | 30.00th=[34866], 40.00th=[35914], 50.00th=[37487], 60.00th=[52167], 00:24:57.854 | 70.00th=[57934], 80.00th=[60556], 90.00th=[63177], 95.00th=[65274], 00:24:57.854 | 99.00th=[68682], 99.50th=[70779], 99.90th=[74974], 99.95th=[74974], 00:24:57.854 | 99.99th=[78119] 00:24:57.854 bw ( KiB/s): min=254464, max=468480, per=21.55%, avg=353305.35, stdev=96705.63, samples=20 00:24:57.854 iops : min= 994, max= 1830, avg=1380.00, stdev=377.77, samples=20 00:24:57.854 lat (msec) : 20=0.30%, 50=58.58%, 100=41.12% 00:24:57.854 cpu : usr=0.48%, sys=4.72%, ctx=2684, majf=0, minf=4097 00:24:57.854 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:24:57.854 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:57.854 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.1%, >=64=0.0% 00:24:57.854 issued rwts: total=13864,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:57.854 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:57.854 job10: (groupid=0, jobs=1): err= 0: pid=78999: Thu Jul 11 21:38:16 2024 00:24:57.854 read: IOPS=386, BW=96.6MiB/s (101MB/s)(977MiB/10109msec) 00:24:57.854 slat (usec): min=19, max=143749, avg=2548.01, stdev=6553.67 00:24:57.854 clat (msec): min=32, max=252, avg=162.66, stdev=55.02 00:24:57.854 lat (msec): min=33, max=320, avg=165.21, stdev=55.91 00:24:57.854 clat percentiles (msec): 00:24:57.854 | 1.00th=[ 43], 5.00th=[ 61], 10.00th=[ 66], 20.00th=[ 79], 00:24:57.854 | 30.00th=[ 163], 40.00th=[ 184], 50.00th=[ 188], 60.00th=[ 192], 00:24:57.854 | 70.00th=[ 197], 80.00th=[ 201], 90.00th=[ 209], 95.00th=[ 218], 00:24:57.854 | 99.00th=[ 245], 99.50th=[ 245], 99.90th=[ 249], 99.95th=[ 249], 00:24:57.854 | 99.99th=[ 253] 00:24:57.854 bw ( KiB/s): min=69632, max=231473, per=6.00%, avg=98381.85, stdev=43387.29, samples=20 00:24:57.854 iops : min= 272, max= 904, avg=384.20, stdev=169.35, samples=20 00:24:57.854 lat (msec) : 50=1.66%, 100=20.58%, 250=77.73%, 500=0.03% 00:24:57.854 cpu : usr=0.18%, sys=1.67%, ctx=933, majf=0, minf=4097 00:24:57.854 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:24:57.854 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:57.854 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:57.854 issued rwts: total=3906,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:57.854 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:57.854 00:24:57.854 Run status group 0 (all jobs): 00:24:57.854 READ: bw=1601MiB/s (1679MB/s), 84.7MiB/s-346MiB/s (88.9MB/s-363MB/s), io=15.8GiB (17.0GB), run=10011-10114msec 00:24:57.854 00:24:57.854 Disk stats (read/write): 00:24:57.854 nvme0n1: ios=9878/0, merge=0/0, ticks=1234802/0, in_queue=1234802, util=97.66% 00:24:57.854 nvme10n1: ios=6725/0, merge=0/0, ticks=1227808/0, in_queue=1227808, util=97.77% 00:24:57.854 nvme1n1: ios=9825/0, merge=0/0, ticks=1235793/0, in_queue=1235793, util=97.96% 00:24:57.854 nvme2n1: ios=6825/0, merge=0/0, ticks=1227784/0, in_queue=1227784, util=98.24% 00:24:57.854 nvme3n1: ios=9868/0, merge=0/0, ticks=1234505/0, in_queue=1234505, util=98.22% 00:24:57.854 nvme4n1: ios=9842/0, merge=0/0, ticks=1236852/0, in_queue=1236852, util=98.47% 00:24:57.854 nvme5n1: ios=26292/0, merge=0/0, ticks=1238650/0, in_queue=1238650, util=98.42% 00:24:57.854 nvme6n1: ios=6849/0, merge=0/0, ticks=1228257/0, in_queue=1228257, util=98.57% 00:24:57.854 nvme7n1: ios=6820/0, merge=0/0, ticks=1228476/0, in_queue=1228476, util=98.85% 00:24:57.854 nvme8n1: ios=27659/0, merge=0/0, ticks=1241702/0, in_queue=1241702, util=99.11% 00:24:57.854 nvme9n1: ios=7698/0, merge=0/0, ticks=1227735/0, in_queue=1227735, util=99.05% 00:24:57.854 21:38:16 -- target/multiconnection.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:24:57.854 [global] 00:24:57.854 thread=1 00:24:57.854 invalidate=1 00:24:57.854 rw=randwrite 00:24:57.854 time_based=1 00:24:57.854 runtime=10 00:24:57.854 ioengine=libaio 00:24:57.854 direct=1 00:24:57.854 bs=262144 00:24:57.854 iodepth=64 00:24:57.854 norandommap=1 00:24:57.854 numjobs=1 00:24:57.854 00:24:57.854 [job0] 00:24:57.854 filename=/dev/nvme0n1 00:24:57.854 [job1] 00:24:57.854 filename=/dev/nvme10n1 00:24:57.854 [job2] 00:24:57.854 filename=/dev/nvme1n1 00:24:57.854 [job3] 00:24:57.854 filename=/dev/nvme2n1 
00:24:57.854 [job4] 00:24:57.854 filename=/dev/nvme3n1 00:24:57.854 [job5] 00:24:57.854 filename=/dev/nvme4n1 00:24:57.854 [job6] 00:24:57.854 filename=/dev/nvme5n1 00:24:57.854 [job7] 00:24:57.854 filename=/dev/nvme6n1 00:24:57.854 [job8] 00:24:57.855 filename=/dev/nvme7n1 00:24:57.855 [job9] 00:24:57.855 filename=/dev/nvme8n1 00:24:57.855 [job10] 00:24:57.855 filename=/dev/nvme9n1 00:24:57.855 Could not set queue depth (nvme0n1) 00:24:57.855 Could not set queue depth (nvme10n1) 00:24:57.855 Could not set queue depth (nvme1n1) 00:24:57.855 Could not set queue depth (nvme2n1) 00:24:57.855 Could not set queue depth (nvme3n1) 00:24:57.855 Could not set queue depth (nvme4n1) 00:24:57.855 Could not set queue depth (nvme5n1) 00:24:57.855 Could not set queue depth (nvme6n1) 00:24:57.855 Could not set queue depth (nvme7n1) 00:24:57.855 Could not set queue depth (nvme8n1) 00:24:57.855 Could not set queue depth (nvme9n1) 00:24:57.855 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:57.855 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:57.855 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:57.855 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:57.855 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:57.855 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:57.855 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:57.855 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:57.855 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:57.855 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:57.855 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:57.855 fio-3.35 00:24:57.855 Starting 11 threads 00:25:07.813 00:25:07.813 job0: (groupid=0, jobs=1): err= 0: pid=79199: Thu Jul 11 21:38:27 2024 00:25:07.813 write: IOPS=413, BW=103MiB/s (108MB/s)(1047MiB/10135msec); 0 zone resets 00:25:07.813 slat (usec): min=21, max=12286, avg=2382.71, stdev=4080.38 00:25:07.813 clat (msec): min=12, max=283, avg=152.40, stdev=17.11 00:25:07.813 lat (msec): min=12, max=283, avg=154.78, stdev=16.88 00:25:07.813 clat percentiles (msec): 00:25:07.813 | 1.00th=[ 87], 5.00th=[ 124], 10.00th=[ 146], 20.00th=[ 148], 00:25:07.813 | 30.00th=[ 153], 40.00th=[ 155], 50.00th=[ 157], 60.00th=[ 157], 00:25:07.813 | 70.00th=[ 159], 80.00th=[ 159], 90.00th=[ 161], 95.00th=[ 163], 00:25:07.813 | 99.00th=[ 188], 99.50th=[ 236], 99.90th=[ 275], 99.95th=[ 275], 00:25:07.813 | 99.99th=[ 284] 00:25:07.813 bw ( KiB/s): min=102400, max=129282, per=6.97%, avg=105627.85, stdev=6041.27, samples=20 00:25:07.813 iops : min= 400, max= 505, avg=412.60, stdev=23.60, samples=20 00:25:07.813 lat (msec) : 20=0.10%, 50=0.48%, 100=0.57%, 250=98.52%, 500=0.33% 00:25:07.813 cpu : usr=0.96%, sys=1.26%, ctx=4327, majf=0, minf=1 00:25:07.813 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 
32=0.8%, >=64=98.5% 00:25:07.813 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:07.813 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:07.813 issued rwts: total=0,4189,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:07.813 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:07.813 job1: (groupid=0, jobs=1): err= 0: pid=79200: Thu Jul 11 21:38:27 2024 00:25:07.813 write: IOPS=414, BW=104MiB/s (109MB/s)(1050MiB/10139msec); 0 zone resets 00:25:07.813 slat (usec): min=20, max=12946, avg=2376.32, stdev=4071.14 00:25:07.813 clat (msec): min=6, max=288, avg=152.11, stdev=17.63 00:25:07.813 lat (msec): min=6, max=288, avg=154.49, stdev=17.42 00:25:07.813 clat percentiles (msec): 00:25:07.813 | 1.00th=[ 82], 5.00th=[ 124], 10.00th=[ 144], 20.00th=[ 148], 00:25:07.813 | 30.00th=[ 153], 40.00th=[ 155], 50.00th=[ 157], 60.00th=[ 157], 00:25:07.813 | 70.00th=[ 159], 80.00th=[ 159], 90.00th=[ 161], 95.00th=[ 161], 00:25:07.813 | 99.00th=[ 192], 99.50th=[ 241], 99.90th=[ 279], 99.95th=[ 279], 00:25:07.813 | 99.99th=[ 288] 00:25:07.813 bw ( KiB/s): min=102195, max=132096, per=6.99%, avg=105897.85, stdev=6585.59, samples=20 00:25:07.813 iops : min= 399, max= 516, avg=413.30, stdev=25.83, samples=20 00:25:07.813 lat (msec) : 10=0.05%, 20=0.10%, 50=0.38%, 100=0.67%, 250=98.38% 00:25:07.813 lat (msec) : 500=0.43% 00:25:07.813 cpu : usr=1.08%, sys=1.32%, ctx=5205, majf=0, minf=1 00:25:07.813 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:25:07.813 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:07.813 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:07.813 issued rwts: total=0,4198,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:07.813 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:07.813 job2: (groupid=0, jobs=1): err= 0: pid=79207: Thu Jul 11 21:38:27 2024 00:25:07.813 write: IOPS=414, BW=104MiB/s (109MB/s)(1051MiB/10135msec); 0 zone resets 00:25:07.813 slat (usec): min=25, max=44539, avg=2374.43, stdev=4116.67 00:25:07.813 clat (msec): min=47, max=277, avg=151.85, stdev=12.01 00:25:07.813 lat (msec): min=47, max=277, avg=154.23, stdev=11.48 00:25:07.813 clat percentiles (msec): 00:25:07.813 | 1.00th=[ 117], 5.00th=[ 142], 10.00th=[ 144], 20.00th=[ 146], 00:25:07.813 | 30.00th=[ 150], 40.00th=[ 153], 50.00th=[ 155], 60.00th=[ 155], 00:25:07.813 | 70.00th=[ 155], 80.00th=[ 157], 90.00th=[ 157], 95.00th=[ 159], 00:25:07.813 | 99.00th=[ 174], 99.50th=[ 230], 99.90th=[ 268], 99.95th=[ 268], 00:25:07.813 | 99.99th=[ 279] 00:25:07.813 bw ( KiB/s): min=99840, max=108544, per=6.99%, avg=105967.55, stdev=1963.25, samples=20 00:25:07.813 iops : min= 390, max= 424, avg=413.90, stdev= 7.69, samples=20 00:25:07.813 lat (msec) : 50=0.07%, 100=0.67%, 250=99.02%, 500=0.24% 00:25:07.813 cpu : usr=1.00%, sys=1.32%, ctx=4906, majf=0, minf=1 00:25:07.813 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:25:07.813 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:07.813 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:07.813 issued rwts: total=0,4204,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:07.813 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:07.813 job3: (groupid=0, jobs=1): err= 0: pid=79213: Thu Jul 11 21:38:27 2024 00:25:07.813 write: IOPS=732, BW=183MiB/s (192MB/s)(1846MiB/10079msec); 0 zone resets 00:25:07.813 slat (usec): min=20, max=30349, avg=1336.74, 
stdev=2275.88 00:25:07.813 clat (msec): min=27, max=159, avg=86.00, stdev= 6.21 00:25:07.813 lat (msec): min=29, max=159, avg=87.34, stdev= 5.96 00:25:07.813 clat percentiles (msec): 00:25:07.813 | 1.00th=[ 66], 5.00th=[ 82], 10.00th=[ 83], 20.00th=[ 84], 00:25:07.813 | 30.00th=[ 85], 40.00th=[ 87], 50.00th=[ 87], 60.00th=[ 88], 00:25:07.813 | 70.00th=[ 88], 80.00th=[ 89], 90.00th=[ 89], 95.00th=[ 90], 00:25:07.813 | 99.00th=[ 99], 99.50th=[ 114], 99.90th=[ 148], 99.95th=[ 155], 00:25:07.813 | 99.99th=[ 161] 00:25:07.813 bw ( KiB/s): min=174080, max=200192, per=12.36%, avg=187335.85, stdev=4598.69, samples=20 00:25:07.813 iops : min= 680, max= 782, avg=731.70, stdev=17.97, samples=20 00:25:07.813 lat (msec) : 50=0.62%, 100=98.48%, 250=0.89% 00:25:07.813 cpu : usr=1.63%, sys=1.99%, ctx=9889, majf=0, minf=1 00:25:07.814 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:25:07.814 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:07.814 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:07.814 issued rwts: total=0,7383,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:07.814 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:07.814 job4: (groupid=0, jobs=1): err= 0: pid=79214: Thu Jul 11 21:38:27 2024 00:25:07.814 write: IOPS=730, BW=183MiB/s (192MB/s)(1842MiB/10081msec); 0 zone resets 00:25:07.814 slat (usec): min=20, max=9458, avg=1352.04, stdev=2274.87 00:25:07.814 clat (msec): min=11, max=162, avg=86.17, stdev= 6.84 00:25:07.814 lat (msec): min=11, max=162, avg=87.53, stdev= 6.58 00:25:07.814 clat percentiles (msec): 00:25:07.814 | 1.00th=[ 78], 5.00th=[ 82], 10.00th=[ 83], 20.00th=[ 84], 00:25:07.814 | 30.00th=[ 86], 40.00th=[ 87], 50.00th=[ 87], 60.00th=[ 88], 00:25:07.814 | 70.00th=[ 88], 80.00th=[ 89], 90.00th=[ 89], 95.00th=[ 90], 00:25:07.814 | 99.00th=[ 105], 99.50th=[ 114], 99.90th=[ 153], 99.95th=[ 157], 00:25:07.814 | 99.99th=[ 163] 00:25:07.814 bw ( KiB/s): min=178688, max=190976, per=12.34%, avg=186996.25, stdev=2554.85, samples=20 00:25:07.814 iops : min= 698, max= 746, avg=730.35, stdev=10.01, samples=20 00:25:07.814 lat (msec) : 20=0.16%, 50=0.43%, 100=98.01%, 250=1.40% 00:25:07.814 cpu : usr=1.62%, sys=2.10%, ctx=6853, majf=0, minf=1 00:25:07.814 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:25:07.814 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:07.814 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:07.814 issued rwts: total=0,7369,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:07.814 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:07.814 job5: (groupid=0, jobs=1): err= 0: pid=79215: Thu Jul 11 21:38:27 2024 00:25:07.814 write: IOPS=416, BW=104MiB/s (109MB/s)(1056MiB/10139msec); 0 zone resets 00:25:07.814 slat (usec): min=25, max=24519, avg=2362.80, stdev=4062.19 00:25:07.814 clat (msec): min=9, max=281, avg=151.16, stdev=15.67 00:25:07.814 lat (msec): min=9, max=281, avg=153.52, stdev=15.38 00:25:07.814 clat percentiles (msec): 00:25:07.814 | 1.00th=[ 71], 5.00th=[ 142], 10.00th=[ 144], 20.00th=[ 146], 00:25:07.814 | 30.00th=[ 150], 40.00th=[ 153], 50.00th=[ 155], 60.00th=[ 155], 00:25:07.814 | 70.00th=[ 155], 80.00th=[ 157], 90.00th=[ 157], 95.00th=[ 159], 00:25:07.814 | 99.00th=[ 178], 99.50th=[ 234], 99.90th=[ 271], 99.95th=[ 271], 00:25:07.814 | 99.99th=[ 284] 00:25:07.814 bw ( KiB/s): min=104448, max=114176, per=7.03%, avg=106536.55, stdev=2295.89, samples=20 00:25:07.814 iops : 
min= 408, max= 446, avg=416.15, stdev= 8.97, samples=20 00:25:07.814 lat (msec) : 10=0.07%, 20=0.09%, 50=0.47%, 100=0.76%, 250=98.27% 00:25:07.814 lat (msec) : 500=0.33% 00:25:07.814 cpu : usr=1.12%, sys=1.26%, ctx=4905, majf=0, minf=1 00:25:07.814 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:25:07.814 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:07.814 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:07.814 issued rwts: total=0,4225,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:07.814 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:07.814 job6: (groupid=0, jobs=1): err= 0: pid=79216: Thu Jul 11 21:38:27 2024 00:25:07.814 write: IOPS=430, BW=108MiB/s (113MB/s)(1092MiB/10139msec); 0 zone resets 00:25:07.814 slat (usec): min=26, max=11698, avg=2268.10, stdev=3965.25 00:25:07.814 clat (msec): min=13, max=284, avg=146.20, stdev=27.20 00:25:07.814 lat (msec): min=13, max=284, avg=148.47, stdev=27.37 00:25:07.814 clat percentiles (msec): 00:25:07.814 | 1.00th=[ 55], 5.00th=[ 85], 10.00th=[ 90], 20.00th=[ 146], 00:25:07.814 | 30.00th=[ 150], 40.00th=[ 155], 50.00th=[ 157], 60.00th=[ 157], 00:25:07.814 | 70.00th=[ 159], 80.00th=[ 159], 90.00th=[ 161], 95.00th=[ 161], 00:25:07.814 | 99.00th=[ 180], 99.50th=[ 236], 99.90th=[ 275], 99.95th=[ 275], 00:25:07.814 | 99.99th=[ 284] 00:25:07.814 bw ( KiB/s): min=102195, max=182272, per=7.27%, avg=110212.50, stdev=19621.28, samples=20 00:25:07.814 iops : min= 399, max= 712, avg=430.45, stdev=76.67, samples=20 00:25:07.814 lat (msec) : 20=0.18%, 50=0.71%, 100=11.90%, 250=86.88%, 500=0.32% 00:25:07.814 cpu : usr=0.99%, sys=1.55%, ctx=5280, majf=0, minf=1 00:25:07.814 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:25:07.814 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:07.814 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:07.814 issued rwts: total=0,4369,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:07.814 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:07.814 job7: (groupid=0, jobs=1): err= 0: pid=79217: Thu Jul 11 21:38:27 2024 00:25:07.814 write: IOPS=413, BW=103MiB/s (108MB/s)(1046MiB/10128msec); 0 zone resets 00:25:07.814 slat (usec): min=19, max=71261, avg=2385.91, stdev=4232.15 00:25:07.814 clat (msec): min=76, max=273, avg=152.45, stdev=10.70 00:25:07.814 lat (msec): min=76, max=273, avg=154.84, stdev= 9.99 00:25:07.814 clat percentiles (msec): 00:25:07.814 | 1.00th=[ 126], 5.00th=[ 142], 10.00th=[ 144], 20.00th=[ 146], 00:25:07.814 | 30.00th=[ 153], 40.00th=[ 153], 50.00th=[ 155], 60.00th=[ 155], 00:25:07.814 | 70.00th=[ 155], 80.00th=[ 157], 90.00th=[ 157], 95.00th=[ 159], 00:25:07.814 | 99.00th=[ 180], 99.50th=[ 226], 99.90th=[ 264], 99.95th=[ 264], 00:25:07.814 | 99.99th=[ 275] 00:25:07.814 bw ( KiB/s): min=92160, max=108544, per=6.96%, avg=105455.20, stdev=3408.75, samples=20 00:25:07.814 iops : min= 360, max= 424, avg=411.90, stdev=13.31, samples=20 00:25:07.814 lat (msec) : 100=0.38%, 250=99.38%, 500=0.24% 00:25:07.814 cpu : usr=1.08%, sys=1.00%, ctx=3467, majf=0, minf=1 00:25:07.814 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:25:07.814 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:07.814 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:07.814 issued rwts: total=0,4184,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:07.814 latency : target=0, window=0, 
percentile=100.00%, depth=64 00:25:07.814 job8: (groupid=0, jobs=1): err= 0: pid=79218: Thu Jul 11 21:38:27 2024 00:25:07.814 write: IOPS=414, BW=104MiB/s (109MB/s)(1052MiB/10143msec); 0 zone resets 00:25:07.814 slat (usec): min=20, max=67066, avg=2373.00, stdev=4184.63 00:25:07.814 clat (msec): min=7, max=287, avg=151.86, stdev=17.04 00:25:07.814 lat (msec): min=7, max=287, avg=154.24, stdev=16.77 00:25:07.814 clat percentiles (msec): 00:25:07.814 | 1.00th=[ 62], 5.00th=[ 142], 10.00th=[ 144], 20.00th=[ 146], 00:25:07.814 | 30.00th=[ 150], 40.00th=[ 153], 50.00th=[ 155], 60.00th=[ 155], 00:25:07.814 | 70.00th=[ 157], 80.00th=[ 157], 90.00th=[ 157], 95.00th=[ 161], 00:25:07.814 | 99.00th=[ 192], 99.50th=[ 241], 99.90th=[ 279], 99.95th=[ 279], 00:25:07.814 | 99.99th=[ 288] 00:25:07.814 bw ( KiB/s): min=104448, max=108544, per=7.00%, avg=106086.40, stdev=1282.42, samples=20 00:25:07.814 iops : min= 408, max= 424, avg=414.40, stdev= 5.01, samples=20 00:25:07.814 lat (msec) : 10=0.10%, 20=0.26%, 50=0.40%, 100=0.67%, 250=98.24% 00:25:07.814 lat (msec) : 500=0.33% 00:25:07.814 cpu : usr=0.96%, sys=1.24%, ctx=3932, majf=0, minf=1 00:25:07.814 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:25:07.814 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:07.814 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:07.814 issued rwts: total=0,4207,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:07.814 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:07.814 job9: (groupid=0, jobs=1): err= 0: pid=79219: Thu Jul 11 21:38:27 2024 00:25:07.814 write: IOPS=1145, BW=286MiB/s (300MB/s)(2878MiB/10048msec); 0 zone resets 00:25:07.814 slat (usec): min=17, max=11504, avg=863.59, stdev=1445.62 00:25:07.814 clat (msec): min=18, max=101, avg=54.97, stdev= 7.18 00:25:07.814 lat (msec): min=18, max=101, avg=55.83, stdev= 7.21 00:25:07.814 clat percentiles (msec): 00:25:07.814 | 1.00th=[ 51], 5.00th=[ 52], 10.00th=[ 52], 20.00th=[ 52], 00:25:07.814 | 30.00th=[ 53], 40.00th=[ 54], 50.00th=[ 54], 60.00th=[ 55], 00:25:07.814 | 70.00th=[ 55], 80.00th=[ 56], 90.00th=[ 56], 95.00th=[ 58], 00:25:07.814 | 99.00th=[ 90], 99.50th=[ 90], 99.90th=[ 94], 99.95th=[ 99], 00:25:07.814 | 99.99th=[ 102] 00:25:07.814 bw ( KiB/s): min=178176, max=305152, per=19.34%, avg=292969.60, stdev=28853.98, samples=20 00:25:07.814 iops : min= 696, max= 1192, avg=1144.30, stdev=112.68, samples=20 00:25:07.814 lat (msec) : 20=0.03%, 50=0.52%, 100=99.43%, 250=0.02% 00:25:07.814 cpu : usr=2.21%, sys=2.98%, ctx=15835, majf=0, minf=1 00:25:07.814 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:25:07.814 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:07.814 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:07.814 issued rwts: total=0,11513,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:07.814 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:07.814 job10: (groupid=0, jobs=1): err= 0: pid=79220: Thu Jul 11 21:38:27 2024 00:25:07.814 write: IOPS=413, BW=103MiB/s (108MB/s)(1050MiB/10144msec); 0 zone resets 00:25:07.814 slat (usec): min=22, max=15309, avg=2377.71, stdev=4070.84 00:25:07.814 clat (msec): min=13, max=290, avg=152.21, stdev=17.37 00:25:07.814 lat (msec): min=13, max=290, avg=154.58, stdev=17.15 00:25:07.814 clat percentiles (msec): 00:25:07.814 | 1.00th=[ 87], 5.00th=[ 124], 10.00th=[ 144], 20.00th=[ 148], 00:25:07.814 | 30.00th=[ 150], 40.00th=[ 155], 50.00th=[ 
157], 60.00th=[ 157], 00:25:07.814 | 70.00th=[ 159], 80.00th=[ 159], 90.00th=[ 161], 95.00th=[ 161], 00:25:07.814 | 99.00th=[ 194], 99.50th=[ 243], 99.90th=[ 279], 99.95th=[ 279], 00:25:07.814 | 99.99th=[ 292] 00:25:07.814 bw ( KiB/s): min=102400, max=129024, per=6.99%, avg=105845.80, stdev=6056.84, samples=20 00:25:07.814 iops : min= 400, max= 504, avg=413.35, stdev=23.67, samples=20 00:25:07.814 lat (msec) : 20=0.14%, 50=0.43%, 100=0.57%, 250=98.43%, 500=0.43% 00:25:07.814 cpu : usr=1.00%, sys=1.33%, ctx=5198, majf=0, minf=1 00:25:07.814 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:25:07.814 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:07.814 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:07.814 issued rwts: total=0,4198,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:07.814 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:07.814 00:25:07.814 Run status group 0 (all jobs): 00:25:07.814 WRITE: bw=1480MiB/s (1552MB/s), 103MiB/s-286MiB/s (108MB/s-300MB/s), io=14.7GiB (15.7GB), run=10048-10144msec 00:25:07.814 00:25:07.814 Disk stats (read/write): 00:25:07.814 nvme0n1: ios=50/8253, merge=0/0, ticks=53/1213607, in_queue=1213660, util=98.05% 00:25:07.814 nvme10n1: ios=49/8279, merge=0/0, ticks=30/1214657, in_queue=1214687, util=98.12% 00:25:07.814 nvme1n1: ios=48/8273, merge=0/0, ticks=37/1213321, in_queue=1213358, util=98.14% 00:25:07.814 nvme2n1: ios=47/14633, merge=0/0, ticks=37/1217040, in_queue=1217077, util=98.37% 00:25:07.814 nvme3n1: ios=37/14614, merge=0/0, ticks=34/1217346, in_queue=1217380, util=98.35% 00:25:07.814 nvme4n1: ios=0/8321, merge=0/0, ticks=0/1213737, in_queue=1213737, util=98.30% 00:25:07.814 nvme5n1: ios=0/8614, merge=0/0, ticks=0/1214484, in_queue=1214484, util=98.43% 00:25:07.814 nvme6n1: ios=0/8228, merge=0/0, ticks=0/1212669, in_queue=1212669, util=98.28% 00:25:07.814 nvme7n1: ios=0/8302, merge=0/0, ticks=0/1215889, in_queue=1215889, util=98.85% 00:25:07.814 nvme8n1: ios=0/22873, merge=0/0, ticks=0/1217610, in_queue=1217610, util=98.65% 00:25:07.814 nvme9n1: ios=0/8280, merge=0/0, ticks=0/1215620, in_queue=1215620, util=99.00% 00:25:07.814 21:38:27 -- target/multiconnection.sh@36 -- # sync 00:25:07.814 21:38:27 -- target/multiconnection.sh@37 -- # seq 1 11 00:25:07.814 21:38:27 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:07.814 21:38:27 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:25:07.814 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:25:07.814 21:38:27 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:25:07.814 21:38:27 -- common/autotest_common.sh@1198 -- # local i=0 00:25:07.814 21:38:27 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:25:07.814 21:38:27 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK1 00:25:07.814 21:38:27 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:25:07.814 21:38:27 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK1 00:25:07.814 21:38:27 -- common/autotest_common.sh@1210 -- # return 0 00:25:07.814 21:38:27 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:07.814 21:38:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:07.814 21:38:27 -- common/autotest_common.sh@10 -- # set +x 00:25:07.814 21:38:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:07.814 21:38:27 -- target/multiconnection.sh@37 -- # for i in $(seq 1 
$NVMF_SUBSYS) 00:25:07.814 21:38:27 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:25:07.814 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:25:07.814 21:38:27 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:25:07.814 21:38:27 -- common/autotest_common.sh@1198 -- # local i=0 00:25:07.814 21:38:27 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:25:07.814 21:38:27 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK2 00:25:07.814 21:38:27 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:25:07.814 21:38:27 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK2 00:25:07.814 21:38:27 -- common/autotest_common.sh@1210 -- # return 0 00:25:07.814 21:38:27 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:25:07.814 21:38:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:07.814 21:38:27 -- common/autotest_common.sh@10 -- # set +x 00:25:07.814 21:38:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:07.814 21:38:27 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:07.814 21:38:27 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:25:07.814 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:25:07.814 21:38:27 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:25:07.814 21:38:27 -- common/autotest_common.sh@1198 -- # local i=0 00:25:07.814 21:38:27 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK3 00:25:07.814 21:38:27 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:25:07.814 21:38:27 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:25:07.814 21:38:27 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK3 00:25:07.814 21:38:27 -- common/autotest_common.sh@1210 -- # return 0 00:25:07.814 21:38:27 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:25:07.814 21:38:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:07.814 21:38:27 -- common/autotest_common.sh@10 -- # set +x 00:25:07.814 21:38:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:07.814 21:38:28 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:07.814 21:38:28 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:25:07.814 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:25:07.814 21:38:28 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:25:07.814 21:38:28 -- common/autotest_common.sh@1198 -- # local i=0 00:25:07.814 21:38:28 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK4 00:25:07.814 21:38:28 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:25:07.814 21:38:28 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK4 00:25:07.814 21:38:28 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:25:07.814 21:38:28 -- common/autotest_common.sh@1210 -- # return 0 00:25:07.814 21:38:28 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:25:07.814 21:38:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:07.814 21:38:28 -- common/autotest_common.sh@10 -- # set +x 00:25:07.814 21:38:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:07.814 21:38:28 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:07.814 21:38:28 -- target/multiconnection.sh@38 -- # nvme disconnect -n 
nqn.2016-06.io.spdk:cnode5 00:25:07.814 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:25:07.814 21:38:28 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:25:07.814 21:38:28 -- common/autotest_common.sh@1198 -- # local i=0 00:25:07.814 21:38:28 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:25:07.814 21:38:28 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK5 00:25:07.814 21:38:28 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:25:07.814 21:38:28 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK5 00:25:07.814 21:38:28 -- common/autotest_common.sh@1210 -- # return 0 00:25:07.814 21:38:28 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:25:07.814 21:38:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:07.814 21:38:28 -- common/autotest_common.sh@10 -- # set +x 00:25:07.814 21:38:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:07.814 21:38:28 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:07.814 21:38:28 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:25:07.814 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:25:07.814 21:38:28 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:25:07.814 21:38:28 -- common/autotest_common.sh@1198 -- # local i=0 00:25:07.814 21:38:28 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:25:07.814 21:38:28 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK6 00:25:07.814 21:38:28 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:25:07.814 21:38:28 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK6 00:25:07.814 21:38:28 -- common/autotest_common.sh@1210 -- # return 0 00:25:07.814 21:38:28 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:25:07.814 21:38:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:07.814 21:38:28 -- common/autotest_common.sh@10 -- # set +x 00:25:07.814 21:38:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:07.814 21:38:28 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:07.814 21:38:28 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:25:07.815 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:25:07.815 21:38:28 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:25:07.815 21:38:28 -- common/autotest_common.sh@1198 -- # local i=0 00:25:07.815 21:38:28 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:25:07.815 21:38:28 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK7 00:25:07.815 21:38:28 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:25:07.815 21:38:28 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK7 00:25:07.815 21:38:28 -- common/autotest_common.sh@1210 -- # return 0 00:25:07.815 21:38:28 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:25:07.815 21:38:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:07.815 21:38:28 -- common/autotest_common.sh@10 -- # set +x 00:25:07.815 21:38:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:07.815 21:38:28 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:07.815 21:38:28 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:25:07.815 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 
controller(s) 00:25:07.815 21:38:28 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:25:07.815 21:38:28 -- common/autotest_common.sh@1198 -- # local i=0 00:25:07.815 21:38:28 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:25:07.815 21:38:28 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK8 00:25:07.815 21:38:28 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:25:07.815 21:38:28 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK8 00:25:07.815 21:38:28 -- common/autotest_common.sh@1210 -- # return 0 00:25:07.815 21:38:28 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:25:07.815 21:38:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:07.815 21:38:28 -- common/autotest_common.sh@10 -- # set +x 00:25:07.815 21:38:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:07.815 21:38:28 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:07.815 21:38:28 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:25:07.815 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:25:07.815 21:38:28 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:25:07.815 21:38:28 -- common/autotest_common.sh@1198 -- # local i=0 00:25:07.815 21:38:28 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:25:07.815 21:38:28 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK9 00:25:07.815 21:38:28 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:25:07.815 21:38:28 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK9 00:25:07.815 21:38:28 -- common/autotest_common.sh@1210 -- # return 0 00:25:07.815 21:38:28 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:25:07.815 21:38:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:07.815 21:38:28 -- common/autotest_common.sh@10 -- # set +x 00:25:07.815 21:38:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:07.815 21:38:28 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:07.815 21:38:28 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:25:07.815 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:25:07.815 21:38:28 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:25:07.815 21:38:28 -- common/autotest_common.sh@1198 -- # local i=0 00:25:07.815 21:38:28 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:25:07.815 21:38:28 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK10 00:25:07.815 21:38:28 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK10 00:25:07.815 21:38:28 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:25:07.815 21:38:28 -- common/autotest_common.sh@1210 -- # return 0 00:25:07.815 21:38:28 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:25:07.815 21:38:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:07.815 21:38:28 -- common/autotest_common.sh@10 -- # set +x 00:25:07.815 21:38:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:07.815 21:38:28 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:07.815 21:38:28 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:25:07.815 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:25:07.815 21:38:28 -- target/multiconnection.sh@39 -- # 
waitforserial_disconnect SPDK11 00:25:07.815 21:38:28 -- common/autotest_common.sh@1198 -- # local i=0 00:25:07.815 21:38:28 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:25:07.815 21:38:28 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK11 00:25:07.815 21:38:28 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK11 00:25:07.815 21:38:28 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:25:07.815 21:38:28 -- common/autotest_common.sh@1210 -- # return 0 00:25:07.815 21:38:28 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:25:07.815 21:38:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:07.815 21:38:28 -- common/autotest_common.sh@10 -- # set +x 00:25:07.815 21:38:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:07.815 21:38:28 -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:25:07.815 21:38:28 -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:25:07.815 21:38:28 -- target/multiconnection.sh@47 -- # nvmftestfini 00:25:07.815 21:38:28 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:07.815 21:38:28 -- nvmf/common.sh@116 -- # sync 00:25:07.815 21:38:28 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:07.815 21:38:28 -- nvmf/common.sh@119 -- # set +e 00:25:07.815 21:38:28 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:07.815 21:38:28 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:07.815 rmmod nvme_tcp 00:25:07.815 rmmod nvme_fabrics 00:25:07.815 rmmod nvme_keyring 00:25:08.073 21:38:28 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:08.073 21:38:28 -- nvmf/common.sh@123 -- # set -e 00:25:08.073 21:38:28 -- nvmf/common.sh@124 -- # return 0 00:25:08.073 21:38:28 -- nvmf/common.sh@477 -- # '[' -n 78526 ']' 00:25:08.073 21:38:28 -- nvmf/common.sh@478 -- # killprocess 78526 00:25:08.073 21:38:28 -- common/autotest_common.sh@926 -- # '[' -z 78526 ']' 00:25:08.073 21:38:28 -- common/autotest_common.sh@930 -- # kill -0 78526 00:25:08.073 21:38:28 -- common/autotest_common.sh@931 -- # uname 00:25:08.073 21:38:28 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:08.074 21:38:28 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 78526 00:25:08.074 killing process with pid 78526 00:25:08.074 21:38:28 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:08.074 21:38:28 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:08.074 21:38:28 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 78526' 00:25:08.074 21:38:28 -- common/autotest_common.sh@945 -- # kill 78526 00:25:08.074 21:38:28 -- common/autotest_common.sh@950 -- # wait 78526 00:25:08.640 21:38:29 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:08.640 21:38:29 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:08.640 21:38:29 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:08.640 21:38:29 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:08.640 21:38:29 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:08.640 21:38:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:08.640 21:38:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:08.640 21:38:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:08.640 21:38:29 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:25:08.640 00:25:08.640 real 0m49.059s 00:25:08.640 user 2m43.791s 00:25:08.640 sys 0m32.627s 00:25:08.640 21:38:29 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:25:08.640 21:38:29 -- common/autotest_common.sh@10 -- # set +x 00:25:08.640 ************************************ 00:25:08.640 END TEST nvmf_multiconnection 00:25:08.640 ************************************ 00:25:08.640 21:38:29 -- nvmf/nvmf.sh@66 -- # run_test nvmf_initiator_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:25:08.640 21:38:29 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:25:08.640 21:38:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:08.640 21:38:29 -- common/autotest_common.sh@10 -- # set +x 00:25:08.640 ************************************ 00:25:08.640 START TEST nvmf_initiator_timeout 00:25:08.640 ************************************ 00:25:08.640 21:38:29 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:25:08.640 * Looking for test storage... 00:25:08.640 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:25:08.640 21:38:29 -- target/initiator_timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:08.640 21:38:29 -- nvmf/common.sh@7 -- # uname -s 00:25:08.640 21:38:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:08.640 21:38:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:08.640 21:38:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:08.640 21:38:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:08.640 21:38:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:08.640 21:38:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:08.640 21:38:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:08.640 21:38:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:08.640 21:38:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:08.640 21:38:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:08.640 21:38:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:65f0dc09-2f81-4c7b-a413-2a2a000e2750 00:25:08.640 21:38:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=65f0dc09-2f81-4c7b-a413-2a2a000e2750 00:25:08.641 21:38:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:08.641 21:38:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:08.641 21:38:29 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:08.641 21:38:29 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:08.641 21:38:29 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:08.641 21:38:29 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:08.641 21:38:29 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:08.641 21:38:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:08.641 21:38:29 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:08.641 21:38:29 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:08.641 21:38:29 -- paths/export.sh@5 -- # export PATH 00:25:08.641 21:38:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:08.641 21:38:29 -- nvmf/common.sh@46 -- # : 0 00:25:08.641 21:38:29 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:08.641 21:38:29 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:08.641 21:38:29 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:08.641 21:38:29 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:08.641 21:38:29 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:08.641 21:38:29 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:08.641 21:38:29 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:08.641 21:38:29 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:08.641 21:38:29 -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:08.641 21:38:29 -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:08.641 21:38:29 -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:25:08.641 21:38:29 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:08.641 21:38:29 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:08.641 21:38:29 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:08.641 21:38:29 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:08.641 21:38:29 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:08.641 21:38:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:08.641 21:38:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:08.641 21:38:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:08.641 21:38:29 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:25:08.641 21:38:29 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:25:08.641 21:38:29 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:25:08.641 21:38:29 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:25:08.641 21:38:29 -- nvmf/common.sh@419 -- # [[ tcp == 
tcp ]] 00:25:08.641 21:38:29 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:25:08.641 21:38:29 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:08.641 21:38:29 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:08.641 21:38:29 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:25:08.641 21:38:29 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:25:08.641 21:38:29 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:08.641 21:38:29 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:08.641 21:38:29 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:08.641 21:38:29 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:08.641 21:38:29 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:08.641 21:38:29 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:08.641 21:38:29 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:08.641 21:38:29 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:08.641 21:38:29 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:25:08.641 21:38:29 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:25:08.641 Cannot find device "nvmf_tgt_br" 00:25:08.641 21:38:29 -- nvmf/common.sh@154 -- # true 00:25:08.641 21:38:29 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:25:08.641 Cannot find device "nvmf_tgt_br2" 00:25:08.641 21:38:29 -- nvmf/common.sh@155 -- # true 00:25:08.641 21:38:29 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:25:08.641 21:38:29 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:25:08.641 Cannot find device "nvmf_tgt_br" 00:25:08.641 21:38:29 -- nvmf/common.sh@157 -- # true 00:25:08.641 21:38:29 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:25:08.641 Cannot find device "nvmf_tgt_br2" 00:25:08.641 21:38:29 -- nvmf/common.sh@158 -- # true 00:25:08.641 21:38:29 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:25:08.641 21:38:29 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:25:08.899 21:38:29 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:08.899 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:08.899 21:38:29 -- nvmf/common.sh@161 -- # true 00:25:08.899 21:38:29 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:08.899 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:08.899 21:38:29 -- nvmf/common.sh@162 -- # true 00:25:08.899 21:38:29 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:25:08.899 21:38:29 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:08.899 21:38:29 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:08.899 21:38:29 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:08.899 21:38:29 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:08.899 21:38:29 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:08.899 21:38:29 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:08.899 21:38:29 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:08.899 21:38:29 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 
00:25:08.899 21:38:29 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:25:08.899 21:38:29 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:25:08.899 21:38:29 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:25:08.899 21:38:29 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:25:08.899 21:38:29 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:08.899 21:38:29 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:08.899 21:38:29 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:08.899 21:38:29 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:25:08.899 21:38:29 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:25:08.899 21:38:29 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:25:08.899 21:38:29 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:08.899 21:38:29 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:08.899 21:38:29 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:08.899 21:38:29 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:08.899 21:38:29 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:25:08.899 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:08.899 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.107 ms 00:25:08.899 00:25:08.899 --- 10.0.0.2 ping statistics --- 00:25:08.899 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:08.899 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:25:08.899 21:38:29 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:25:08.899 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:08.899 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:25:08.899 00:25:08.899 --- 10.0.0.3 ping statistics --- 00:25:08.899 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:08.899 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:25:08.899 21:38:29 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:08.899 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:08.899 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:25:08.899 00:25:08.899 --- 10.0.0.1 ping statistics --- 00:25:08.899 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:08.899 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:25:08.899 21:38:29 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:08.899 21:38:29 -- nvmf/common.sh@421 -- # return 0 00:25:08.900 21:38:29 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:08.900 21:38:29 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:08.900 21:38:29 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:08.900 21:38:29 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:08.900 21:38:29 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:08.900 21:38:29 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:08.900 21:38:29 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:08.900 21:38:29 -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:25:08.900 21:38:29 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:08.900 21:38:29 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:08.900 21:38:29 -- common/autotest_common.sh@10 -- # set +x 00:25:08.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
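For anyone reproducing this stage by hand, the nvmf_veth_init sequence traced above reduces to the following topology. This is a condensed sketch of the commands in the log (run as root; iproute2 and iptables assumed), not a verbatim extract of the helper:

    ip netns add nvmf_tgt_ns_spdk                                  # target runs in its own namespace
    ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator <-> bridge leg
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br       # first target leg
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2      # second target leg
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                   # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if     # first listener address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2    # second listener address
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk sh -c 'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" master nvmf_br; done
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP (port 4420) in
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                             # initiator -> both target IPs
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator

The three ping runs logged above confirm exactly this reachability before the target application is started.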
00:25:08.900 21:38:29 -- nvmf/common.sh@469 -- # nvmfpid=79586 00:25:08.900 21:38:29 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:08.900 21:38:29 -- nvmf/common.sh@470 -- # waitforlisten 79586 00:25:08.900 21:38:29 -- common/autotest_common.sh@819 -- # '[' -z 79586 ']' 00:25:08.900 21:38:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:08.900 21:38:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:08.900 21:38:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:08.900 21:38:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:08.900 21:38:29 -- common/autotest_common.sh@10 -- # set +x 00:25:09.180 [2024-07-11 21:38:29.872282] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:25:09.180 [2024-07-11 21:38:29.872588] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:09.180 [2024-07-11 21:38:30.009638] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:09.180 [2024-07-11 21:38:30.098827] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:09.180 [2024-07-11 21:38:30.099231] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:09.180 [2024-07-11 21:38:30.099284] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:09.180 [2024-07-11 21:38:30.099415] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
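The target application is then launched inside that namespace and the job blocks until its RPC socket answers; a minimal sketch of what nvmfappstart/waitforlisten amount to here (the polling loop is illustrative, the flags are the ones from the log):

    # all trace groups on (-e 0xFFFF), shared-memory id 0 (-i 0), cores 0-3 (-m 0xF)
    ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # wait for /var/tmp/spdk.sock to come up before issuing any rpc_cmd calls
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done

The pid captured here (79586 in this run) is what the EXIT trap later hands to killprocess.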
00:25:09.180 [2024-07-11 21:38:30.099598] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:09.180 [2024-07-11 21:38:30.099741] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:09.180 [2024-07-11 21:38:30.099821] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:09.180 [2024-07-11 21:38:30.099823] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:10.114 21:38:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:10.114 21:38:30 -- common/autotest_common.sh@852 -- # return 0 00:25:10.114 21:38:30 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:10.114 21:38:30 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:10.114 21:38:30 -- common/autotest_common.sh@10 -- # set +x 00:25:10.114 21:38:30 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:10.114 21:38:30 -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:25:10.114 21:38:30 -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:10.114 21:38:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:10.114 21:38:30 -- common/autotest_common.sh@10 -- # set +x 00:25:10.114 Malloc0 00:25:10.114 21:38:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:10.114 21:38:30 -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:25:10.114 21:38:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:10.114 21:38:30 -- common/autotest_common.sh@10 -- # set +x 00:25:10.114 Delay0 00:25:10.114 21:38:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:10.114 21:38:30 -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:10.114 21:38:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:10.114 21:38:30 -- common/autotest_common.sh@10 -- # set +x 00:25:10.114 [2024-07-11 21:38:30.914714] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:10.114 21:38:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:10.114 21:38:30 -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:25:10.114 21:38:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:10.114 21:38:30 -- common/autotest_common.sh@10 -- # set +x 00:25:10.114 21:38:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:10.114 21:38:30 -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:10.114 21:38:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:10.114 21:38:30 -- common/autotest_common.sh@10 -- # set +x 00:25:10.114 21:38:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:10.114 21:38:30 -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:10.114 21:38:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:10.114 21:38:30 -- common/autotest_common.sh@10 -- # set +x 00:25:10.114 [2024-07-11 21:38:30.943110] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:10.114 21:38:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:10.114 21:38:30 -- target/initiator_timeout.sh@29 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:65f0dc09-2f81-4c7b-a413-2a2a000e2750 --hostid=65f0dc09-2f81-4c7b-a413-2a2a000e2750 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:25:10.372 21:38:31 -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:25:10.372 21:38:31 -- common/autotest_common.sh@1177 -- # local i=0 00:25:10.372 21:38:31 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:25:10.372 21:38:31 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:25:10.372 21:38:31 -- common/autotest_common.sh@1184 -- # sleep 2 00:25:12.270 21:38:33 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:25:12.270 21:38:33 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:25:12.270 21:38:33 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:25:12.270 21:38:33 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:25:12.270 21:38:33 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:25:12.270 21:38:33 -- common/autotest_common.sh@1187 -- # return 0 00:25:12.270 21:38:33 -- target/initiator_timeout.sh@35 -- # fio_pid=79650 00:25:12.270 21:38:33 -- target/initiator_timeout.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:25:12.270 21:38:33 -- target/initiator_timeout.sh@37 -- # sleep 3 00:25:12.270 [global] 00:25:12.270 thread=1 00:25:12.270 invalidate=1 00:25:12.270 rw=write 00:25:12.270 time_based=1 00:25:12.270 runtime=60 00:25:12.270 ioengine=libaio 00:25:12.270 direct=1 00:25:12.270 bs=4096 00:25:12.270 iodepth=1 00:25:12.270 norandommap=0 00:25:12.270 numjobs=1 00:25:12.270 00:25:12.270 verify_dump=1 00:25:12.270 verify_backlog=512 00:25:12.270 verify_state_save=0 00:25:12.270 do_verify=1 00:25:12.270 verify=crc32c-intel 00:25:12.270 [job0] 00:25:12.270 filename=/dev/nvme0n1 00:25:12.270 Could not set queue depth (nvme0n1) 00:25:12.527 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:25:12.527 fio-3.35 00:25:12.527 Starting 1 thread 00:25:15.809 21:38:36 -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:25:15.809 21:38:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:15.809 21:38:36 -- common/autotest_common.sh@10 -- # set +x 00:25:15.809 true 00:25:15.809 21:38:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:15.809 21:38:36 -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:25:15.809 21:38:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:15.809 21:38:36 -- common/autotest_common.sh@10 -- # set +x 00:25:15.809 true 00:25:15.809 21:38:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:15.809 21:38:36 -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:25:15.809 21:38:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:15.809 21:38:36 -- common/autotest_common.sh@10 -- # set +x 00:25:15.809 true 00:25:15.809 21:38:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:15.809 21:38:36 -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:25:15.809 21:38:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:15.809 21:38:36 -- common/autotest_common.sh@10 -- # set +x 00:25:15.809 true 00:25:15.809 21:38:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:15.809 21:38:36 -- 
target/initiator_timeout.sh@45 -- # sleep 3 00:25:18.336 21:38:39 -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:25:18.336 21:38:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:18.337 21:38:39 -- common/autotest_common.sh@10 -- # set +x 00:25:18.337 true 00:25:18.337 21:38:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:18.337 21:38:39 -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:25:18.337 21:38:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:18.337 21:38:39 -- common/autotest_common.sh@10 -- # set +x 00:25:18.337 true 00:25:18.337 21:38:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:18.337 21:38:39 -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:25:18.337 21:38:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:18.337 21:38:39 -- common/autotest_common.sh@10 -- # set +x 00:25:18.337 true 00:25:18.337 21:38:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:18.337 21:38:39 -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:25:18.337 21:38:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:18.337 21:38:39 -- common/autotest_common.sh@10 -- # set +x 00:25:18.337 true 00:25:18.337 21:38:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:18.337 21:38:39 -- target/initiator_timeout.sh@53 -- # fio_status=0 00:25:18.337 21:38:39 -- target/initiator_timeout.sh@54 -- # wait 79650 00:26:14.540 00:26:14.540 job0: (groupid=0, jobs=1): err= 0: pid=79677: Thu Jul 11 21:39:33 2024 00:26:14.540 read: IOPS=764, BW=3059KiB/s (3133kB/s)(179MiB/60000msec) 00:26:14.540 slat (usec): min=11, max=139, avg=14.25, stdev= 3.11 00:26:14.540 clat (usec): min=150, max=1842, avg=215.59, stdev=27.88 00:26:14.540 lat (usec): min=178, max=1865, avg=229.84, stdev=28.63 00:26:14.540 clat percentiles (usec): 00:26:14.540 | 1.00th=[ 176], 5.00th=[ 182], 10.00th=[ 188], 20.00th=[ 196], 00:26:14.540 | 30.00th=[ 202], 40.00th=[ 206], 50.00th=[ 212], 60.00th=[ 219], 00:26:14.540 | 70.00th=[ 225], 80.00th=[ 235], 90.00th=[ 247], 95.00th=[ 262], 00:26:14.540 | 99.00th=[ 289], 99.50th=[ 302], 99.90th=[ 343], 99.95th=[ 478], 00:26:14.540 | 99.99th=[ 848] 00:26:14.540 write: IOPS=768, BW=3072KiB/s (3146kB/s)(180MiB/60000msec); 0 zone resets 00:26:14.540 slat (usec): min=13, max=14474, avg=21.44, stdev=76.51 00:26:14.540 clat (usec): min=123, max=40690k, avg=1048.20, stdev=189553.77 00:26:14.540 lat (usec): min=143, max=40690k, avg=1069.64, stdev=189553.78 00:26:14.540 clat percentiles (usec): 00:26:14.540 | 1.00th=[ 130], 5.00th=[ 137], 10.00th=[ 141], 20.00th=[ 149], 00:26:14.540 | 30.00th=[ 153], 40.00th=[ 157], 50.00th=[ 163], 60.00th=[ 167], 00:26:14.540 | 70.00th=[ 174], 80.00th=[ 182], 90.00th=[ 194], 95.00th=[ 204], 00:26:14.540 | 99.00th=[ 225], 99.50th=[ 239], 99.90th=[ 265], 99.95th=[ 277], 00:26:14.540 | 99.99th=[ 594] 00:26:14.540 bw ( KiB/s): min= 48, max=12168, per=100.00%, avg=9242.26, stdev=1964.58, samples=39 00:26:14.540 iops : min= 12, max= 3042, avg=2310.56, stdev=491.14, samples=39 00:26:14.540 lat (usec) : 250=95.51%, 500=4.46%, 750=0.02%, 1000=0.01% 00:26:14.540 lat (msec) : 2=0.01%, >=2000=0.01% 00:26:14.540 cpu : usr=0.62%, sys=2.04%, ctx=91981, majf=0, minf=2 00:26:14.540 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:14.540 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:26:14.540 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:14.540 issued rwts: total=45891,46080,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:14.540 latency : target=0, window=0, percentile=100.00%, depth=1 00:26:14.540 00:26:14.540 Run status group 0 (all jobs): 00:26:14.540 READ: bw=3059KiB/s (3133kB/s), 3059KiB/s-3059KiB/s (3133kB/s-3133kB/s), io=179MiB (188MB), run=60000-60000msec 00:26:14.540 WRITE: bw=3072KiB/s (3146kB/s), 3072KiB/s-3072KiB/s (3146kB/s-3146kB/s), io=180MiB (189MB), run=60000-60000msec 00:26:14.540 00:26:14.540 Disk stats (read/write): 00:26:14.540 nvme0n1: ios=45804/45938, merge=0/0, ticks=10019/7973, in_queue=17992, util=99.84% 00:26:14.540 21:39:33 -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:14.540 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:14.540 21:39:33 -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:26:14.540 21:39:33 -- common/autotest_common.sh@1198 -- # local i=0 00:26:14.540 21:39:33 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:26:14.540 21:39:33 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:14.540 21:39:33 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:14.540 21:39:33 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:26:14.540 nvmf hotplug test: fio successful as expected 00:26:14.540 21:39:33 -- common/autotest_common.sh@1210 -- # return 0 00:26:14.540 21:39:33 -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:26:14.540 21:39:33 -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:26:14.540 21:39:33 -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:14.540 21:39:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:14.540 21:39:33 -- common/autotest_common.sh@10 -- # set +x 00:26:14.540 21:39:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:14.540 21:39:33 -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:26:14.540 21:39:33 -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:26:14.540 21:39:33 -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:26:14.540 21:39:33 -- nvmf/common.sh@476 -- # nvmfcleanup 00:26:14.540 21:39:33 -- nvmf/common.sh@116 -- # sync 00:26:14.540 21:39:33 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:26:14.540 21:39:33 -- nvmf/common.sh@119 -- # set +e 00:26:14.540 21:39:33 -- nvmf/common.sh@120 -- # for i in {1..20} 00:26:14.540 21:39:33 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:26:14.540 rmmod nvme_tcp 00:26:14.540 rmmod nvme_fabrics 00:26:14.540 rmmod nvme_keyring 00:26:14.540 21:39:33 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:26:14.540 21:39:33 -- nvmf/common.sh@123 -- # set -e 00:26:14.540 21:39:33 -- nvmf/common.sh@124 -- # return 0 00:26:14.540 21:39:33 -- nvmf/common.sh@477 -- # '[' -n 79586 ']' 00:26:14.540 21:39:33 -- nvmf/common.sh@478 -- # killprocess 79586 00:26:14.540 21:39:33 -- common/autotest_common.sh@926 -- # '[' -z 79586 ']' 00:26:14.540 21:39:33 -- common/autotest_common.sh@930 -- # kill -0 79586 00:26:14.540 21:39:33 -- common/autotest_common.sh@931 -- # uname 00:26:14.540 21:39:33 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:14.540 21:39:33 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 79586 00:26:14.540 killing process with pid 79586 00:26:14.540 
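Stripped of the xtrace noise, the storage path this timeout test exercised comes down to a short RPC sequence plus one fio run; a sketch of the calls visible earlier in the log (rpc.py talking to the target's /var/tmp/spdk.sock is assumed):

    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0                             # 64 MiB bdev, 512 B blocks
    ./scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30   # wrap it in a delay bdev
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
        -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    # fio-wrapper then runs the 60 s, 4 KiB, iodepth=1 write job shown above against /dev/nvme0n1;
    # mid-run the Delay0 latencies are raised via bdev_delay_update_latency (31000000 / 310000000)
    # and later dropped back to 30, and the test passes as long as fio still exits with status 0
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1

That matches the "fio successful as expected" verdict and the clean teardown that follows.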
21:39:33 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:14.540 21:39:33 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:14.540 21:39:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 79586' 00:26:14.540 21:39:33 -- common/autotest_common.sh@945 -- # kill 79586 00:26:14.540 21:39:33 -- common/autotest_common.sh@950 -- # wait 79586 00:26:14.541 21:39:33 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:26:14.541 21:39:33 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:26:14.541 21:39:33 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:26:14.541 21:39:33 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:14.541 21:39:33 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:26:14.541 21:39:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:14.541 21:39:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:14.541 21:39:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:14.541 21:39:33 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:26:14.541 ************************************ 00:26:14.541 END TEST nvmf_initiator_timeout 00:26:14.541 ************************************ 00:26:14.541 00:26:14.541 real 1m4.483s 00:26:14.541 user 3m57.731s 00:26:14.541 sys 0m17.375s 00:26:14.541 21:39:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:14.541 21:39:33 -- common/autotest_common.sh@10 -- # set +x 00:26:14.541 21:39:33 -- nvmf/nvmf.sh@69 -- # [[ virt == phy ]] 00:26:14.541 21:39:33 -- nvmf/nvmf.sh@86 -- # timing_exit target 00:26:14.541 21:39:33 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:14.541 21:39:33 -- common/autotest_common.sh@10 -- # set +x 00:26:14.541 21:39:33 -- nvmf/nvmf.sh@88 -- # timing_enter host 00:26:14.541 21:39:33 -- common/autotest_common.sh@712 -- # xtrace_disable 00:26:14.541 21:39:33 -- common/autotest_common.sh@10 -- # set +x 00:26:14.541 21:39:33 -- nvmf/nvmf.sh@90 -- # [[ 1 -eq 0 ]] 00:26:14.541 21:39:33 -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:26:14.541 21:39:33 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:26:14.541 21:39:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:14.541 21:39:33 -- common/autotest_common.sh@10 -- # set +x 00:26:14.541 ************************************ 00:26:14.541 START TEST nvmf_identify 00:26:14.541 ************************************ 00:26:14.541 21:39:33 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:26:14.541 * Looking for test storage... 
00:26:14.541 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:26:14.541 21:39:34 -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:14.541 21:39:34 -- nvmf/common.sh@7 -- # uname -s 00:26:14.541 21:39:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:14.541 21:39:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:14.541 21:39:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:14.541 21:39:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:14.541 21:39:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:14.541 21:39:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:14.541 21:39:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:14.541 21:39:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:14.541 21:39:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:14.541 21:39:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:14.541 21:39:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:65f0dc09-2f81-4c7b-a413-2a2a000e2750 00:26:14.541 21:39:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=65f0dc09-2f81-4c7b-a413-2a2a000e2750 00:26:14.541 21:39:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:14.541 21:39:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:14.541 21:39:34 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:14.541 21:39:34 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:14.541 21:39:34 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:14.541 21:39:34 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:14.541 21:39:34 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:14.541 21:39:34 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:14.541 21:39:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:14.541 21:39:34 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:14.541 21:39:34 -- paths/export.sh@5 
-- # export PATH 00:26:14.541 21:39:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:14.541 21:39:34 -- nvmf/common.sh@46 -- # : 0 00:26:14.541 21:39:34 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:26:14.541 21:39:34 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:26:14.541 21:39:34 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:26:14.541 21:39:34 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:14.541 21:39:34 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:14.541 21:39:34 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:26:14.541 21:39:34 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:26:14.541 21:39:34 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:26:14.541 21:39:34 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:14.541 21:39:34 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:14.541 21:39:34 -- host/identify.sh@14 -- # nvmftestinit 00:26:14.541 21:39:34 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:26:14.541 21:39:34 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:14.541 21:39:34 -- nvmf/common.sh@436 -- # prepare_net_devs 00:26:14.541 21:39:34 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:26:14.541 21:39:34 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:26:14.541 21:39:34 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:14.541 21:39:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:14.541 21:39:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:14.541 21:39:34 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:26:14.541 21:39:34 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:26:14.541 21:39:34 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:26:14.541 21:39:34 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:26:14.541 21:39:34 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:26:14.541 21:39:34 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:26:14.541 21:39:34 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:14.541 21:39:34 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:14.541 21:39:34 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:26:14.541 21:39:34 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:26:14.541 21:39:34 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:14.541 21:39:34 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:14.541 21:39:34 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:14.541 21:39:34 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:14.541 21:39:34 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:14.541 21:39:34 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:14.541 21:39:34 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:14.541 21:39:34 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:14.541 21:39:34 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:26:14.541 21:39:34 -- 
nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:26:14.541 Cannot find device "nvmf_tgt_br" 00:26:14.541 21:39:34 -- nvmf/common.sh@154 -- # true 00:26:14.541 21:39:34 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:26:14.541 Cannot find device "nvmf_tgt_br2" 00:26:14.541 21:39:34 -- nvmf/common.sh@155 -- # true 00:26:14.541 21:39:34 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:26:14.541 21:39:34 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:26:14.541 Cannot find device "nvmf_tgt_br" 00:26:14.541 21:39:34 -- nvmf/common.sh@157 -- # true 00:26:14.541 21:39:34 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:26:14.541 Cannot find device "nvmf_tgt_br2" 00:26:14.541 21:39:34 -- nvmf/common.sh@158 -- # true 00:26:14.541 21:39:34 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:26:14.541 21:39:34 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:26:14.541 21:39:34 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:14.541 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:14.541 21:39:34 -- nvmf/common.sh@161 -- # true 00:26:14.541 21:39:34 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:14.541 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:14.541 21:39:34 -- nvmf/common.sh@162 -- # true 00:26:14.541 21:39:34 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:26:14.541 21:39:34 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:14.541 21:39:34 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:14.541 21:39:34 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:14.541 21:39:34 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:14.541 21:39:34 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:14.541 21:39:34 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:14.541 21:39:34 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:26:14.541 21:39:34 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:26:14.541 21:39:34 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:26:14.541 21:39:34 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:26:14.541 21:39:34 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:26:14.541 21:39:34 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:26:14.541 21:39:34 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:14.541 21:39:34 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:14.541 21:39:34 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:14.541 21:39:34 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:26:14.541 21:39:34 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:26:14.541 21:39:34 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:26:14.541 21:39:34 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:14.541 21:39:34 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:14.541 21:39:34 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:14.542 21:39:34 -- 
nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:14.542 21:39:34 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:26:14.542 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:14.542 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.087 ms 00:26:14.542 00:26:14.542 --- 10.0.0.2 ping statistics --- 00:26:14.542 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:14.542 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:26:14.542 21:39:34 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:26:14.542 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:14.542 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:26:14.542 00:26:14.542 --- 10.0.0.3 ping statistics --- 00:26:14.542 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:14.542 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:26:14.542 21:39:34 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:14.542 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:14.542 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:26:14.542 00:26:14.542 --- 10.0.0.1 ping statistics --- 00:26:14.542 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:14.542 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:26:14.542 21:39:34 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:14.542 21:39:34 -- nvmf/common.sh@421 -- # return 0 00:26:14.542 21:39:34 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:26:14.542 21:39:34 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:14.542 21:39:34 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:26:14.542 21:39:34 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:26:14.542 21:39:34 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:14.542 21:39:34 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:26:14.542 21:39:34 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:26:14.542 21:39:34 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:26:14.542 21:39:34 -- common/autotest_common.sh@712 -- # xtrace_disable 00:26:14.542 21:39:34 -- common/autotest_common.sh@10 -- # set +x 00:26:14.542 21:39:34 -- host/identify.sh@19 -- # nvmfpid=80517 00:26:14.542 21:39:34 -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:14.542 21:39:34 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:14.542 21:39:34 -- host/identify.sh@23 -- # waitforlisten 80517 00:26:14.542 21:39:34 -- common/autotest_common.sh@819 -- # '[' -z 80517 ']' 00:26:14.542 21:39:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:14.542 21:39:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:14.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:14.542 21:39:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:14.542 21:39:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:14.542 21:39:34 -- common/autotest_common.sh@10 -- # set +x 00:26:14.542 [2024-07-11 21:39:34.476519] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
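Both suites source the same nvmf/common.sh, which mints a host identity with nvme gen-hostnqn (visible above) and reuses it for every kernel-initiator connect, such as the one in the timeout test earlier. A sketch of that pairing follows; the parameter expansion used to peel the UUID out of the NQN is an assumption, not a quote from the helper:

    NVME_HOSTNQN=$(nvme gen-hostnqn)         # e.g. nqn.2014-08.org.nvmexpress:uuid:65f0dc09-...
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}      # keep only the trailing UUID for --hostid
    nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
        -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420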
00:26:14.542 [2024-07-11 21:39:34.476883] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:14.542 [2024-07-11 21:39:34.618043] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:14.542 [2024-07-11 21:39:34.722195] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:14.542 [2024-07-11 21:39:34.722672] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:14.542 [2024-07-11 21:39:34.722880] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:14.542 [2024-07-11 21:39:34.723042] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:14.542 [2024-07-11 21:39:34.723253] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:14.542 [2024-07-11 21:39:34.723514] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:14.542 [2024-07-11 21:39:34.723515] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:14.542 [2024-07-11 21:39:34.723352] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:14.542 21:39:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:14.542 21:39:35 -- common/autotest_common.sh@852 -- # return 0 00:26:14.542 21:39:35 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:14.542 21:39:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:14.542 21:39:35 -- common/autotest_common.sh@10 -- # set +x 00:26:14.542 [2024-07-11 21:39:35.442167] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:14.542 21:39:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:14.542 21:39:35 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:26:14.542 21:39:35 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:14.542 21:39:35 -- common/autotest_common.sh@10 -- # set +x 00:26:14.801 21:39:35 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:14.801 21:39:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:14.801 21:39:35 -- common/autotest_common.sh@10 -- # set +x 00:26:14.801 Malloc0 00:26:14.801 21:39:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:14.801 21:39:35 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:14.801 21:39:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:14.801 21:39:35 -- common/autotest_common.sh@10 -- # set +x 00:26:14.801 21:39:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:14.801 21:39:35 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:26:14.801 21:39:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:14.801 21:39:35 -- common/autotest_common.sh@10 -- # set +x 00:26:14.801 21:39:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:14.801 21:39:35 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:14.801 21:39:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:14.801 21:39:35 -- common/autotest_common.sh@10 -- # set +x 00:26:14.801 [2024-07-11 21:39:35.541051] tcp.c: 951:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:14.801 21:39:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:14.801 21:39:35 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:14.801 21:39:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:14.801 21:39:35 -- common/autotest_common.sh@10 -- # set +x 00:26:14.801 21:39:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:14.801 21:39:35 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:26:14.801 21:39:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:14.801 21:39:35 -- common/autotest_common.sh@10 -- # set +x 00:26:14.801 [2024-07-11 21:39:35.556783] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:26:14.801 [ 00:26:14.801 { 00:26:14.801 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:26:14.801 "subtype": "Discovery", 00:26:14.801 "listen_addresses": [ 00:26:14.801 { 00:26:14.801 "transport": "TCP", 00:26:14.801 "trtype": "TCP", 00:26:14.801 "adrfam": "IPv4", 00:26:14.801 "traddr": "10.0.0.2", 00:26:14.801 "trsvcid": "4420" 00:26:14.801 } 00:26:14.801 ], 00:26:14.801 "allow_any_host": true, 00:26:14.801 "hosts": [] 00:26:14.801 }, 00:26:14.801 { 00:26:14.801 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:14.801 "subtype": "NVMe", 00:26:14.801 "listen_addresses": [ 00:26:14.801 { 00:26:14.801 "transport": "TCP", 00:26:14.801 "trtype": "TCP", 00:26:14.801 "adrfam": "IPv4", 00:26:14.801 "traddr": "10.0.0.2", 00:26:14.801 "trsvcid": "4420" 00:26:14.801 } 00:26:14.801 ], 00:26:14.801 "allow_any_host": true, 00:26:14.801 "hosts": [], 00:26:14.801 "serial_number": "SPDK00000000000001", 00:26:14.801 "model_number": "SPDK bdev Controller", 00:26:14.801 "max_namespaces": 32, 00:26:14.801 "min_cntlid": 1, 00:26:14.801 "max_cntlid": 65519, 00:26:14.801 "namespaces": [ 00:26:14.801 { 00:26:14.801 "nsid": 1, 00:26:14.801 "bdev_name": "Malloc0", 00:26:14.801 "name": "Malloc0", 00:26:14.801 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:26:14.801 "eui64": "ABCDEF0123456789", 00:26:14.801 "uuid": "77cf0757-62dd-45b1-9f83-d8bebab4bf79" 00:26:14.801 } 00:26:14.801 ] 00:26:14.801 } 00:26:14.801 ] 00:26:14.801 21:39:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:14.801 21:39:35 -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:26:14.801 [2024-07-11 21:39:35.592371] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
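With cnode1 carrying Malloc0 (nsid 1) and the discovery subsystem listening on the same address, the host-side check reduces to two commands; a sketch with the job's absolute paths trimmed to repo-relative ones:

    # target side: dump the configured subsystems (the JSON shown above)
    ./scripts/rpc.py nvmf_get_subsystems
    # host side: walk the discovery service and identify every controller behind it,
    # with all log flags enabled (-L all), which produces the DEBUG trace that follows
    ./build/bin/spdk_nvme_identify \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' \
        -L all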
00:26:14.801 [2024-07-11 21:39:35.592700] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80552 ] 00:26:14.801 [2024-07-11 21:39:35.734859] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:26:14.801 [2024-07-11 21:39:35.734950] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:26:14.801 [2024-07-11 21:39:35.734958] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:26:14.801 [2024-07-11 21:39:35.734974] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:26:14.801 [2024-07-11 21:39:35.734990] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl uring 00:26:14.801 [2024-07-11 21:39:35.735178] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:26:14.801 [2024-07-11 21:39:35.735237] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1dbcd70 0 00:26:14.801 [2024-07-11 21:39:35.747518] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:26:14.801 [2024-07-11 21:39:35.747585] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:26:14.801 [2024-07-11 21:39:35.747594] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:26:14.801 [2024-07-11 21:39:35.747598] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:26:14.801 [2024-07-11 21:39:35.747746] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:14.801 [2024-07-11 21:39:35.747755] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:14.801 [2024-07-11 21:39:35.747759] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1dbcd70) 00:26:14.801 [2024-07-11 21:39:35.747776] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:26:14.801 [2024-07-11 21:39:35.747815] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e065f0, cid 0, qid 0 00:26:15.066 [2024-07-11 21:39:35.755507] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:15.066 [2024-07-11 21:39:35.755540] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:15.066 [2024-07-11 21:39:35.755546] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:15.066 [2024-07-11 21:39:35.755551] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e065f0) on tqpair=0x1dbcd70 00:26:15.066 [2024-07-11 21:39:35.755571] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:26:15.066 [2024-07-11 21:39:35.755584] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:26:15.066 [2024-07-11 21:39:35.755591] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:26:15.066 [2024-07-11 21:39:35.755611] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:15.066 [2024-07-11 21:39:35.755617] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:15.066 [2024-07-11 
21:39:35.755621] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1dbcd70) 00:26:15.067 [2024-07-11 21:39:35.755634] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.067 [2024-07-11 21:39:35.755668] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e065f0, cid 0, qid 0 00:26:15.067 [2024-07-11 21:39:35.755753] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:15.067 [2024-07-11 21:39:35.755761] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:15.067 [2024-07-11 21:39:35.755764] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:15.067 [2024-07-11 21:39:35.755769] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e065f0) on tqpair=0x1dbcd70 00:26:15.067 [2024-07-11 21:39:35.755776] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:26:15.067 [2024-07-11 21:39:35.755785] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:26:15.067 [2024-07-11 21:39:35.755793] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:15.067 [2024-07-11 21:39:35.755797] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:15.067 [2024-07-11 21:39:35.755801] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1dbcd70) 00:26:15.067 [2024-07-11 21:39:35.755808] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.067 [2024-07-11 21:39:35.755828] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e065f0, cid 0, qid 0 00:26:15.067 [2024-07-11 21:39:35.755878] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:15.067 [2024-07-11 21:39:35.755885] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:15.067 [2024-07-11 21:39:35.755889] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:15.067 [2024-07-11 21:39:35.755893] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e065f0) on tqpair=0x1dbcd70 00:26:15.067 [2024-07-11 21:39:35.755902] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:26:15.067 [2024-07-11 21:39:35.755911] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:26:15.067 [2024-07-11 21:39:35.755919] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:15.067 [2024-07-11 21:39:35.755923] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:15.067 [2024-07-11 21:39:35.755927] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1dbcd70) 00:26:15.067 [2024-07-11 21:39:35.755935] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.067 [2024-07-11 21:39:35.755952] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e065f0, cid 0, qid 0 00:26:15.067 [2024-07-11 21:39:35.756004] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:15.067 [2024-07-11 21:39:35.756011] 
nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:15.067 [2024-07-11 21:39:35.756014] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:15.067 [2024-07-11 21:39:35.756018] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e065f0) on tqpair=0x1dbcd70 00:26:15.067 [2024-07-11 21:39:35.756025] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:26:15.067 [2024-07-11 21:39:35.756036] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:15.067 [2024-07-11 21:39:35.756040] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:15.067 [2024-07-11 21:39:35.756044] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1dbcd70) 00:26:15.067 [2024-07-11 21:39:35.756051] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.067 [2024-07-11 21:39:35.756068] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e065f0, cid 0, qid 0 00:26:15.067 [2024-07-11 21:39:35.756122] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:15.067 [2024-07-11 21:39:35.756138] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:15.067 [2024-07-11 21:39:35.756143] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:15.067 [2024-07-11 21:39:35.756147] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e065f0) on tqpair=0x1dbcd70 00:26:15.067 [2024-07-11 21:39:35.756154] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:26:15.067 [2024-07-11 21:39:35.756160] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:26:15.067 [2024-07-11 21:39:35.756169] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:26:15.067 [2024-07-11 21:39:35.756275] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:26:15.067 [2024-07-11 21:39:35.756280] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:26:15.067 [2024-07-11 21:39:35.756290] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:15.067 [2024-07-11 21:39:35.756294] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:15.067 [2024-07-11 21:39:35.756298] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1dbcd70) 00:26:15.067 [2024-07-11 21:39:35.756306] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.067 [2024-07-11 21:39:35.756325] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e065f0, cid 0, qid 0 00:26:15.067 [2024-07-11 21:39:35.756376] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:15.067 [2024-07-11 21:39:35.756383] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:15.067 [2024-07-11 21:39:35.756387] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:26:15.067 [2024-07-11 21:39:35.756391] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e065f0) on tqpair=0x1dbcd70 00:26:15.067 [2024-07-11 21:39:35.756397] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:26:15.067 [2024-07-11 21:39:35.756407] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:15.067 [2024-07-11 21:39:35.756412] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:15.067 [2024-07-11 21:39:35.756416] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1dbcd70) 00:26:15.067 [2024-07-11 21:39:35.756423] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.067 [2024-07-11 21:39:35.756440] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e065f0, cid 0, qid 0 00:26:15.067 [2024-07-11 21:39:35.756504] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:15.067 [2024-07-11 21:39:35.756516] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:15.067 [2024-07-11 21:39:35.756520] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:15.067 [2024-07-11 21:39:35.756524] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e065f0) on tqpair=0x1dbcd70 00:26:15.067 [2024-07-11 21:39:35.756531] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:26:15.067 [2024-07-11 21:39:35.756536] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:26:15.067 [2024-07-11 21:39:35.756545] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:26:15.067 [2024-07-11 21:39:35.756562] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:26:15.067 [2024-07-11 21:39:35.756574] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:15.067 [2024-07-11 21:39:35.756578] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:15.067 [2024-07-11 21:39:35.756582] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1dbcd70) 00:26:15.067 [2024-07-11 21:39:35.756590] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.067 [2024-07-11 21:39:35.756611] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e065f0, cid 0, qid 0 00:26:15.067 [2024-07-11 21:39:35.756710] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:15.067 [2024-07-11 21:39:35.756725] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:15.067 [2024-07-11 21:39:35.756730] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:15.067 [2024-07-11 21:39:35.756735] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1dbcd70): datao=0, datal=4096, cccid=0 00:26:15.067 [2024-07-11 21:39:35.756740] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e065f0) on tqpair(0x1dbcd70): expected_datao=0, 
payload_size=4096 00:26:15.067 [2024-07-11 21:39:35.756750] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:15.067 [2024-07-11 21:39:35.756755] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:15.067 [2024-07-11 21:39:35.756764] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:15.067 [2024-07-11 21:39:35.756770] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:15.067 [2024-07-11 21:39:35.756774] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:15.067 [2024-07-11 21:39:35.756779] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e065f0) on tqpair=0x1dbcd70 00:26:15.067 [2024-07-11 21:39:35.756790] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:26:15.067 [2024-07-11 21:39:35.756796] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:26:15.067 [2024-07-11 21:39:35.756800] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:26:15.067 [2024-07-11 21:39:35.756806] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:26:15.067 [2024-07-11 21:39:35.756811] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:26:15.067 [2024-07-11 21:39:35.756816] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:26:15.067 [2024-07-11 21:39:35.756830] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:26:15.067 [2024-07-11 21:39:35.756839] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:15.067 [2024-07-11 21:39:35.756844] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:15.067 [2024-07-11 21:39:35.756847] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1dbcd70) 00:26:15.067 [2024-07-11 21:39:35.756855] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:26:15.067 [2024-07-11 21:39:35.756876] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e065f0, cid 0, qid 0 00:26:15.067 [2024-07-11 21:39:35.756938] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:15.067 [2024-07-11 21:39:35.756945] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:15.067 [2024-07-11 21:39:35.756949] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:15.067 [2024-07-11 21:39:35.756953] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e065f0) on tqpair=0x1dbcd70 00:26:15.067 [2024-07-11 21:39:35.756963] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:15.067 [2024-07-11 21:39:35.756967] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:15.067 [2024-07-11 21:39:35.756971] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1dbcd70) 00:26:15.067 [2024-07-11 21:39:35.756977] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:15.067 [2024-07-11 
21:39:35.756984] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:15.067 [2024-07-11 21:39:35.756988] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:15.067 [2024-07-11 21:39:35.756992] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1dbcd70) 00:26:15.067 [2024-07-11 21:39:35.756998] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:15.068 [2024-07-11 21:39:35.757005] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:15.068 [2024-07-11 21:39:35.757009] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:15.068 [2024-07-11 21:39:35.757013] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1dbcd70) 00:26:15.068 [2024-07-11 21:39:35.757019] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:15.068 [2024-07-11 21:39:35.757025] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:15.068 [2024-07-11 21:39:35.757029] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:15.068 [2024-07-11 21:39:35.757033] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1dbcd70) 00:26:15.068 [2024-07-11 21:39:35.757039] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:15.068 [2024-07-11 21:39:35.757044] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:26:15.068 [2024-07-11 21:39:35.757058] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:26:15.068 [2024-07-11 21:39:35.757066] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:15.068 [2024-07-11 21:39:35.757070] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:15.068 [2024-07-11 21:39:35.757074] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1dbcd70) 00:26:15.068 [2024-07-11 21:39:35.757081] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.068 [2024-07-11 21:39:35.757101] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e065f0, cid 0, qid 0 00:26:15.068 [2024-07-11 21:39:35.757108] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e06750, cid 1, qid 0 00:26:15.068 [2024-07-11 21:39:35.757113] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e068b0, cid 2, qid 0 00:26:15.068 [2024-07-11 21:39:35.757118] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e06a10, cid 3, qid 0 00:26:15.068 [2024-07-11 21:39:35.757123] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e06b70, cid 4, qid 0 00:26:15.068 [2024-07-11 21:39:35.757219] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:15.068 [2024-07-11 21:39:35.757226] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:15.068 [2024-07-11 21:39:35.757229] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:15.068 [2024-07-11 21:39:35.757233] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: 
*DEBUG*: complete tcp_req(0x1e06b70) on tqpair=0x1dbcd70 00:26:15.068 [2024-07-11 21:39:35.757241] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:26:15.068 [2024-07-11 21:39:35.757247] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:26:15.068 [2024-07-11 21:39:35.757258] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:15.068 [2024-07-11 21:39:35.757263] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:15.068 [2024-07-11 21:39:35.757267] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1dbcd70) 00:26:15.068 [2024-07-11 21:39:35.757274] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.068 [2024-07-11 21:39:35.757291] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e06b70, cid 4, qid 0 00:26:15.068 [2024-07-11 21:39:35.757353] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:15.068 [2024-07-11 21:39:35.757365] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:15.068 [2024-07-11 21:39:35.757369] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:15.068 [2024-07-11 21:39:35.757373] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1dbcd70): datao=0, datal=4096, cccid=4 00:26:15.068 [2024-07-11 21:39:35.757378] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e06b70) on tqpair(0x1dbcd70): expected_datao=0, payload_size=4096 00:26:15.068 [2024-07-11 21:39:35.757387] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:15.068 [2024-07-11 21:39:35.757391] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:15.068 [2024-07-11 21:39:35.757399] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:15.068 [2024-07-11 21:39:35.757406] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:15.068 [2024-07-11 21:39:35.757409] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:15.068 [2024-07-11 21:39:35.757413] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e06b70) on tqpair=0x1dbcd70 00:26:15.068 [2024-07-11 21:39:35.757429] nvme_ctrlr.c:4024:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:26:15.068 [2024-07-11 21:39:35.757459] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:15.068 [2024-07-11 21:39:35.757466] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:15.068 [2024-07-11 21:39:35.757469] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1dbcd70) 00:26:15.068 [2024-07-11 21:39:35.757477] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.068 [2024-07-11 21:39:35.757499] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:15.068 [2024-07-11 21:39:35.757505] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:15.068 [2024-07-11 21:39:35.757509] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1dbcd70) 00:26:15.068 [2024-07-11 21:39:35.757516] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:26:15.068 [2024-07-11 21:39:35.757542] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e06b70, cid 4, qid 0 00:26:15.068 [2024-07-11 21:39:35.757550] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e06cd0, cid 5, qid 0 00:26:15.068 [2024-07-11 21:39:35.757670] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:15.068 [2024-07-11 21:39:35.757686] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:15.068 [2024-07-11 21:39:35.757691] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:15.068 [2024-07-11 21:39:35.757694] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1dbcd70): datao=0, datal=1024, cccid=4 00:26:15.068 [2024-07-11 21:39:35.757699] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e06b70) on tqpair(0x1dbcd70): expected_datao=0, payload_size=1024 00:26:15.068 [2024-07-11 21:39:35.757708] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:15.068 [2024-07-11 21:39:35.757712] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:15.068 [2024-07-11 21:39:35.757718] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:15.068 [2024-07-11 21:39:35.757724] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:15.068 [2024-07-11 21:39:35.757728] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:15.068 [2024-07-11 21:39:35.757732] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e06cd0) on tqpair=0x1dbcd70 00:26:15.068 [2024-07-11 21:39:35.757752] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:15.068 [2024-07-11 21:39:35.757760] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:15.068 [2024-07-11 21:39:35.757764] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:15.068 [2024-07-11 21:39:35.757768] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e06b70) on tqpair=0x1dbcd70 00:26:15.068 [2024-07-11 21:39:35.757782] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:15.068 [2024-07-11 21:39:35.757787] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:15.068 [2024-07-11 21:39:35.757791] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1dbcd70) 00:26:15.068 [2024-07-11 21:39:35.757799] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.068 [2024-07-11 21:39:35.757822] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e06b70, cid 4, qid 0 00:26:15.068 [2024-07-11 21:39:35.757892] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:15.068 [2024-07-11 21:39:35.757898] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:15.068 [2024-07-11 21:39:35.757902] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:15.068 [2024-07-11 21:39:35.757907] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1dbcd70): datao=0, datal=3072, cccid=4 00:26:15.068 [2024-07-11 21:39:35.757912] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e06b70) on tqpair(0x1dbcd70): expected_datao=0, payload_size=3072 00:26:15.068 [2024-07-11 
21:39:35.757919] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:15.068 [2024-07-11 21:39:35.757924] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:15.068 [2024-07-11 21:39:35.757932] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:15.068 [2024-07-11 21:39:35.757938] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:15.068 [2024-07-11 21:39:35.757942] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:15.068 [2024-07-11 21:39:35.757946] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e06b70) on tqpair=0x1dbcd70 00:26:15.068 [2024-07-11 21:39:35.757956] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:15.068 [2024-07-11 21:39:35.757961] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:15.068 [2024-07-11 21:39:35.757965] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1dbcd70) 00:26:15.068 [2024-07-11 21:39:35.757972] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.068 [2024-07-11 21:39:35.757994] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e06b70, cid 4, qid 0 00:26:15.068 [2024-07-11 21:39:35.758062] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:15.068 [2024-07-11 21:39:35.758069] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:15.068 [2024-07-11 21:39:35.758072] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:15.068 [2024-07-11 21:39:35.758076] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1dbcd70): datao=0, datal=8, cccid=4 00:26:15.068 [2024-07-11 21:39:35.758081] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e06b70) on tqpair(0x1dbcd70): expected_datao=0, payload_size=8 00:26:15.068 [2024-07-11 21:39:35.758088] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:15.068 [2024-07-11 21:39:35.758092] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:15.068 [2024-07-11 21:39:35.758106] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:15.068 [2024-07-11 21:39:35.758114] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:15.068 [2024-07-11 21:39:35.758117] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:15.068 [2024-07-11 21:39:35.758121] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e06b70) on tqpair=0x1dbcd70 00:26:15.068 ===================================================== 00:26:15.068 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:26:15.068 ===================================================== 00:26:15.068 Controller Capabilities/Features 00:26:15.068 ================================ 00:26:15.068 Vendor ID: 0000 00:26:15.068 Subsystem Vendor ID: 0000 00:26:15.068 Serial Number: .................... 00:26:15.068 Model Number: ........................................ 
00:26:15.068 Firmware Version: 24.01.1 00:26:15.068 Recommended Arb Burst: 0 00:26:15.068 IEEE OUI Identifier: 00 00 00 00:26:15.068 Multi-path I/O 00:26:15.068 May have multiple subsystem ports: No 00:26:15.068 May have multiple controllers: No 00:26:15.068 Associated with SR-IOV VF: No 00:26:15.068 Max Data Transfer Size: 131072 00:26:15.068 Max Number of Namespaces: 0 00:26:15.068 Max Number of I/O Queues: 1024 00:26:15.068 NVMe Specification Version (VS): 1.3 00:26:15.068 NVMe Specification Version (Identify): 1.3 00:26:15.069 Maximum Queue Entries: 128 00:26:15.069 Contiguous Queues Required: Yes 00:26:15.069 Arbitration Mechanisms Supported 00:26:15.069 Weighted Round Robin: Not Supported 00:26:15.069 Vendor Specific: Not Supported 00:26:15.069 Reset Timeout: 15000 ms 00:26:15.069 Doorbell Stride: 4 bytes 00:26:15.069 NVM Subsystem Reset: Not Supported 00:26:15.069 Command Sets Supported 00:26:15.069 NVM Command Set: Supported 00:26:15.069 Boot Partition: Not Supported 00:26:15.069 Memory Page Size Minimum: 4096 bytes 00:26:15.069 Memory Page Size Maximum: 4096 bytes 00:26:15.069 Persistent Memory Region: Not Supported 00:26:15.069 Optional Asynchronous Events Supported 00:26:15.069 Namespace Attribute Notices: Not Supported 00:26:15.069 Firmware Activation Notices: Not Supported 00:26:15.069 ANA Change Notices: Not Supported 00:26:15.069 PLE Aggregate Log Change Notices: Not Supported 00:26:15.069 LBA Status Info Alert Notices: Not Supported 00:26:15.069 EGE Aggregate Log Change Notices: Not Supported 00:26:15.069 Normal NVM Subsystem Shutdown event: Not Supported 00:26:15.069 Zone Descriptor Change Notices: Not Supported 00:26:15.069 Discovery Log Change Notices: Supported 00:26:15.069 Controller Attributes 00:26:15.069 128-bit Host Identifier: Not Supported 00:26:15.069 Non-Operational Permissive Mode: Not Supported 00:26:15.069 NVM Sets: Not Supported 00:26:15.069 Read Recovery Levels: Not Supported 00:26:15.069 Endurance Groups: Not Supported 00:26:15.069 Predictable Latency Mode: Not Supported 00:26:15.069 Traffic Based Keep ALive: Not Supported 00:26:15.069 Namespace Granularity: Not Supported 00:26:15.069 SQ Associations: Not Supported 00:26:15.069 UUID List: Not Supported 00:26:15.069 Multi-Domain Subsystem: Not Supported 00:26:15.069 Fixed Capacity Management: Not Supported 00:26:15.069 Variable Capacity Management: Not Supported 00:26:15.069 Delete Endurance Group: Not Supported 00:26:15.069 Delete NVM Set: Not Supported 00:26:15.069 Extended LBA Formats Supported: Not Supported 00:26:15.069 Flexible Data Placement Supported: Not Supported 00:26:15.069 00:26:15.069 Controller Memory Buffer Support 00:26:15.069 ================================ 00:26:15.069 Supported: No 00:26:15.069 00:26:15.069 Persistent Memory Region Support 00:26:15.069 ================================ 00:26:15.069 Supported: No 00:26:15.069 00:26:15.069 Admin Command Set Attributes 00:26:15.069 ============================ 00:26:15.069 Security Send/Receive: Not Supported 00:26:15.069 Format NVM: Not Supported 00:26:15.069 Firmware Activate/Download: Not Supported 00:26:15.069 Namespace Management: Not Supported 00:26:15.069 Device Self-Test: Not Supported 00:26:15.069 Directives: Not Supported 00:26:15.069 NVMe-MI: Not Supported 00:26:15.069 Virtualization Management: Not Supported 00:26:15.069 Doorbell Buffer Config: Not Supported 00:26:15.069 Get LBA Status Capability: Not Supported 00:26:15.069 Command & Feature Lockdown Capability: Not Supported 00:26:15.069 Abort Command Limit: 1 00:26:15.069 
Async Event Request Limit: 4 00:26:15.069 Number of Firmware Slots: N/A 00:26:15.069 Firmware Slot 1 Read-Only: N/A 00:26:15.069 Firmware Activation Without Reset: N/A 00:26:15.069 Multiple Update Detection Support: N/A 00:26:15.069 Firmware Update Granularity: No Information Provided 00:26:15.069 Per-Namespace SMART Log: No 00:26:15.069 Asymmetric Namespace Access Log Page: Not Supported 00:26:15.069 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:26:15.069 Command Effects Log Page: Not Supported 00:26:15.069 Get Log Page Extended Data: Supported 00:26:15.069 Telemetry Log Pages: Not Supported 00:26:15.069 Persistent Event Log Pages: Not Supported 00:26:15.069 Supported Log Pages Log Page: May Support 00:26:15.069 Commands Supported & Effects Log Page: Not Supported 00:26:15.069 Feature Identifiers & Effects Log Page:May Support 00:26:15.069 NVMe-MI Commands & Effects Log Page: May Support 00:26:15.069 Data Area 4 for Telemetry Log: Not Supported 00:26:15.069 Error Log Page Entries Supported: 128 00:26:15.069 Keep Alive: Not Supported 00:26:15.069 00:26:15.069 NVM Command Set Attributes 00:26:15.069 ========================== 00:26:15.069 Submission Queue Entry Size 00:26:15.069 Max: 1 00:26:15.069 Min: 1 00:26:15.069 Completion Queue Entry Size 00:26:15.069 Max: 1 00:26:15.069 Min: 1 00:26:15.069 Number of Namespaces: 0 00:26:15.069 Compare Command: Not Supported 00:26:15.069 Write Uncorrectable Command: Not Supported 00:26:15.069 Dataset Management Command: Not Supported 00:26:15.069 Write Zeroes Command: Not Supported 00:26:15.069 Set Features Save Field: Not Supported 00:26:15.069 Reservations: Not Supported 00:26:15.069 Timestamp: Not Supported 00:26:15.069 Copy: Not Supported 00:26:15.069 Volatile Write Cache: Not Present 00:26:15.069 Atomic Write Unit (Normal): 1 00:26:15.069 Atomic Write Unit (PFail): 1 00:26:15.069 Atomic Compare & Write Unit: 1 00:26:15.069 Fused Compare & Write: Supported 00:26:15.069 Scatter-Gather List 00:26:15.069 SGL Command Set: Supported 00:26:15.069 SGL Keyed: Supported 00:26:15.069 SGL Bit Bucket Descriptor: Not Supported 00:26:15.069 SGL Metadata Pointer: Not Supported 00:26:15.069 Oversized SGL: Not Supported 00:26:15.069 SGL Metadata Address: Not Supported 00:26:15.069 SGL Offset: Supported 00:26:15.069 Transport SGL Data Block: Not Supported 00:26:15.069 Replay Protected Memory Block: Not Supported 00:26:15.069 00:26:15.069 Firmware Slot Information 00:26:15.069 ========================= 00:26:15.069 Active slot: 0 00:26:15.069 00:26:15.069 00:26:15.069 Error Log 00:26:15.069 ========= 00:26:15.069 00:26:15.069 Active Namespaces 00:26:15.069 ================= 00:26:15.069 Discovery Log Page 00:26:15.069 ================== 00:26:15.069 Generation Counter: 2 00:26:15.069 Number of Records: 2 00:26:15.069 Record Format: 0 00:26:15.069 00:26:15.069 Discovery Log Entry 0 00:26:15.069 ---------------------- 00:26:15.069 Transport Type: 3 (TCP) 00:26:15.069 Address Family: 1 (IPv4) 00:26:15.069 Subsystem Type: 3 (Current Discovery Subsystem) 00:26:15.069 Entry Flags: 00:26:15.069 Duplicate Returned Information: 1 00:26:15.069 Explicit Persistent Connection Support for Discovery: 1 00:26:15.069 Transport Requirements: 00:26:15.069 Secure Channel: Not Required 00:26:15.069 Port ID: 0 (0x0000) 00:26:15.069 Controller ID: 65535 (0xffff) 00:26:15.069 Admin Max SQ Size: 128 00:26:15.069 Transport Service Identifier: 4420 00:26:15.069 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:26:15.069 Transport Address: 10.0.0.2 00:26:15.069 
Discovery Log Entry 1 00:26:15.069 ---------------------- 00:26:15.069 Transport Type: 3 (TCP) 00:26:15.069 Address Family: 1 (IPv4) 00:26:15.069 Subsystem Type: 2 (NVM Subsystem) 00:26:15.069 Entry Flags: 00:26:15.069 Duplicate Returned Information: 0 00:26:15.069 Explicit Persistent Connection Support for Discovery: 0 00:26:15.069 Transport Requirements: 00:26:15.069 Secure Channel: Not Required 00:26:15.069 Port ID: 0 (0x0000) 00:26:15.069 Controller ID: 65535 (0xffff) 00:26:15.069 Admin Max SQ Size: 128 00:26:15.069 Transport Service Identifier: 4420 00:26:15.069 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:26:15.069 Transport Address: 10.0.0.2 [2024-07-11 21:39:35.758232] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:26:15.069 [2024-07-11 21:39:35.758250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.069 [2024-07-11 21:39:35.758257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.069 [2024-07-11 21:39:35.758264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.069 [2024-07-11 21:39:35.758270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.069 [2024-07-11 21:39:35.758279] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:15.069 [2024-07-11 21:39:35.758284] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:15.069 [2024-07-11 21:39:35.758287] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1dbcd70) 00:26:15.069 [2024-07-11 21:39:35.758295] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.069 [2024-07-11 21:39:35.758318] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e06a10, cid 3, qid 0 00:26:15.069 [2024-07-11 21:39:35.758374] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:15.069 [2024-07-11 21:39:35.758394] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:15.069 [2024-07-11 21:39:35.758398] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:15.069 [2024-07-11 21:39:35.758402] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e06a10) on tqpair=0x1dbcd70 00:26:15.069 [2024-07-11 21:39:35.758411] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:15.069 [2024-07-11 21:39:35.758416] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:15.069 [2024-07-11 21:39:35.758419] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1dbcd70) 00:26:15.069 [2024-07-11 21:39:35.758427] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.069 [2024-07-11 21:39:35.758451] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e06a10, cid 3, qid 0 00:26:15.069 [2024-07-11 21:39:35.758541] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:15.070 [2024-07-11 21:39:35.758549] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:15.070 [2024-07-11 21:39:35.758553] 
nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:15.070 [2024-07-11 21:39:35.758557] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e06a10) on tqpair=0x1dbcd70 00:26:15.070 [2024-07-11 21:39:35.758564] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:26:15.070 [2024-07-11 21:39:35.758569] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:26:15.070 [2024-07-11 21:39:35.758580] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:15.070 [2024-07-11 21:39:35.758584] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:15.070 [2024-07-11 21:39:35.758588] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1dbcd70) 00:26:15.070 [2024-07-11 21:39:35.758595] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.070 [2024-07-11 21:39:35.758615] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e06a10, cid 3, qid 0 00:26:15.070 [2024-07-11 21:39:35.758667] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:15.070 [2024-07-11 21:39:35.758674] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:15.070 [2024-07-11 21:39:35.758678] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:15.070 [2024-07-11 21:39:35.758682] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e06a10) on tqpair=0x1dbcd70 00:26:15.070 [2024-07-11 21:39:35.758694] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:15.070 [2024-07-11 21:39:35.758698] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:15.070 [2024-07-11 21:39:35.758702] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1dbcd70) 00:26:15.070 [2024-07-11 21:39:35.758710] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.070 [2024-07-11 21:39:35.758727] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e06a10, cid 3, qid 0 00:26:15.070 [2024-07-11 21:39:35.758774] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:15.070 [2024-07-11 21:39:35.758781] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:15.070 [2024-07-11 21:39:35.758785] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:15.070 [2024-07-11 21:39:35.758789] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e06a10) on tqpair=0x1dbcd70 00:26:15.070 [2024-07-11 21:39:35.758800] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:15.070 [2024-07-11 21:39:35.758805] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:15.070 [2024-07-11 21:39:35.758808] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1dbcd70) 00:26:15.070 [2024-07-11 21:39:35.758816] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.070 [2024-07-11 21:39:35.758832] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e06a10, cid 3, qid 0 00:26:15.070 [2024-07-11 21:39:35.758886] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:15.070 [2024-07-11 
21:39:35.758892] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:15.070 [2024-07-11 21:39:35.758896] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:15.070 [2024-07-11 21:39:35.758900] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e06a10) on tqpair=0x1dbcd70 00:26:15.070 [2024-07-11 21:39:35.758911] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:15.070 [2024-07-11 21:39:35.758916] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:15.070 [2024-07-11 21:39:35.758920] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1dbcd70) 00:26:15.070 [2024-07-11 21:39:35.758927] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.070 [2024-07-11 21:39:35.758944] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e06a10, cid 3, qid 0 00:26:15.070 [2024-07-11 21:39:35.758999] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:15.070 [2024-07-11 21:39:35.759005] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:15.070 [2024-07-11 21:39:35.759009] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:15.070 [2024-07-11 21:39:35.759013] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e06a10) on tqpair=0x1dbcd70 00:26:15.070 [2024-07-11 21:39:35.759024] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:15.070 [2024-07-11 21:39:35.759029] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:15.070 [2024-07-11 21:39:35.759033] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1dbcd70) 00:26:15.070 [2024-07-11 21:39:35.759040] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.070 [2024-07-11 21:39:35.759056] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e06a10, cid 3, qid 0 00:26:15.070 [2024-07-11 21:39:35.759113] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:15.070 [2024-07-11 21:39:35.759120] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:15.070 [2024-07-11 21:39:35.759124] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:15.070 [2024-07-11 21:39:35.759128] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e06a10) on tqpair=0x1dbcd70 00:26:15.070 [2024-07-11 21:39:35.759139] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:15.070 [2024-07-11 21:39:35.759143] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:15.070 [2024-07-11 21:39:35.759147] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1dbcd70) 00:26:15.070 [2024-07-11 21:39:35.759154] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.070 [2024-07-11 21:39:35.759171] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e06a10, cid 3, qid 0 00:26:15.070 [2024-07-11 21:39:35.759225] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:15.070 [2024-07-11 21:39:35.759231] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:15.070 [2024-07-11 21:39:35.759235] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
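After the discovery log page is dumped, the host tears the discovery controller down: the trace shows "Prepare to destruct", the outstanding async event requests completing as ABORTED - SQ DELETION, the 10000 ms shutdown timeout being armed, and then a string of FABRIC PROPERTY GET polls until `nvme_ctrlr_shutdown_poll_async` reports completion a few lines below. The log does not print which fields those polls read; the conventional sequence is to set CC.SHN to "normal shutdown" and poll CSTS.SHST for "complete", and the sketch below illustrates that assumption only, reusing property accessors like the hypothetical prop_get()/prop_set() stand-ins from the earlier sketch (again, not SPDK APIs).

```c
/*
 * Hedged sketch of a shutdown-notification poll: set CC.SHN, then read
 * CSTS until SHST reports "complete" or the timeout expires. prop_get()
 * and prop_set() are assumed property accessors (see earlier sketch).
 */
#include <stdbool.h>
#include <stdint.h>
#include <time.h>

#define NVME_REG_CC        0x14u
#define NVME_REG_CSTS      0x1cu
#define CC_SHN_NORMAL      (1u << 14)   /* CC.SHN = 01b (normal shutdown) */
#define CSTS_SHST_MASK     (3u << 2)    /* CSTS.SHST field                */
#define CSTS_SHST_COMPLETE (2u << 2)    /* SHST = 10b (complete)          */

extern uint32_t prop_get(uint32_t off);
extern void prop_set(uint32_t off, uint32_t val);

static uint64_t now_ms(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (uint64_t)ts.tv_sec * 1000 + (uint64_t)ts.tv_nsec / 1000000;
}

/* Returns true once the controller reports shutdown complete. */
static bool shutdown_controller(uint64_t timeout_ms)
{
    uint64_t deadline = now_ms() + timeout_ms;   /* 10000 ms in the trace */

    prop_set(NVME_REG_CC, prop_get(NVME_REG_CC) | CC_SHN_NORMAL);
    while (now_ms() < deadline) {
        if ((prop_get(NVME_REG_CSTS) & CSTS_SHST_MASK) == CSTS_SHST_COMPLETE)
            return true;   /* the trace reaches this after ~5 ms */
    }
    return false;          /* would surface as a shutdown timeout */
}
```

The RTD3E = 0 message means the controller advertises no resume-from-shutdown latency, so the driver falls back to its default 10-second budget for this poll.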
00:26:15.070 [2024-07-11 21:39:35.759239] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e06a10) on tqpair=0x1dbcd70 00:26:15.070 [2024-07-11 21:39:35.759250] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:15.070 [2024-07-11 21:39:35.759255] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:15.070 [2024-07-11 21:39:35.759259] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1dbcd70) 00:26:15.070 [2024-07-11 21:39:35.759266] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.070 [2024-07-11 21:39:35.759282] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e06a10, cid 3, qid 0 00:26:15.070 [2024-07-11 21:39:35.759333] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:15.070 [2024-07-11 21:39:35.759340] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:15.070 [2024-07-11 21:39:35.759344] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:15.070 [2024-07-11 21:39:35.759348] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e06a10) on tqpair=0x1dbcd70 00:26:15.070 [2024-07-11 21:39:35.759359] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:15.070 [2024-07-11 21:39:35.759363] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:15.070 [2024-07-11 21:39:35.759367] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1dbcd70) 00:26:15.070 [2024-07-11 21:39:35.759374] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.070 [2024-07-11 21:39:35.759391] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e06a10, cid 3, qid 0 00:26:15.070 [2024-07-11 21:39:35.759448] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:15.070 [2024-07-11 21:39:35.759455] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:15.070 [2024-07-11 21:39:35.759458] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:15.070 [2024-07-11 21:39:35.759462] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e06a10) on tqpair=0x1dbcd70 00:26:15.070 [2024-07-11 21:39:35.759473] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:15.070 [2024-07-11 21:39:35.759478] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:15.070 [2024-07-11 21:39:35.763496] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1dbcd70) 00:26:15.070 [2024-07-11 21:39:35.763519] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.070 [2024-07-11 21:39:35.763551] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e06a10, cid 3, qid 0 00:26:15.070 [2024-07-11 21:39:35.763610] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:15.070 [2024-07-11 21:39:35.763618] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:15.070 [2024-07-11 21:39:35.763622] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:15.070 [2024-07-11 21:39:35.763626] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e06a10) on tqpair=0x1dbcd70 00:26:15.070 [2024-07-11 21:39:35.763637] 
nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 milliseconds 00:26:15.070 00:26:15.070 21:39:35 -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:26:15.070 [2024-07-11 21:39:35.801695] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:26:15.070 [2024-07-11 21:39:35.801745] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80554 ] 00:26:15.070 [2024-07-11 21:39:35.937831] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:26:15.070 [2024-07-11 21:39:35.937920] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:26:15.070 [2024-07-11 21:39:35.937928] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:26:15.070 [2024-07-11 21:39:35.937945] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:26:15.070 [2024-07-11 21:39:35.937960] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl uring 00:26:15.070 [2024-07-11 21:39:35.938129] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:26:15.070 [2024-07-11 21:39:35.938188] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xc0ad70 0 00:26:15.070 [2024-07-11 21:39:35.942509] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:26:15.070 [2024-07-11 21:39:35.942538] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:26:15.070 [2024-07-11 21:39:35.942544] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:26:15.070 [2024-07-11 21:39:35.942548] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:26:15.070 [2024-07-11 21:39:35.942597] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:15.070 [2024-07-11 21:39:35.942605] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:15.070 [2024-07-11 21:39:35.942609] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc0ad70) 00:26:15.070 [2024-07-11 21:39:35.942625] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:26:15.070 [2024-07-11 21:39:35.942656] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc545f0, cid 0, qid 0 00:26:15.070 [2024-07-11 21:39:35.950512] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:15.070 [2024-07-11 21:39:35.950543] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:15.070 [2024-07-11 21:39:35.950549] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:15.070 [2024-07-11 21:39:35.950554] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc545f0) on tqpair=0xc0ad70 00:26:15.071 [2024-07-11 21:39:35.950569] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:26:15.071 [2024-07-11 21:39:35.950579] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs 
(no timeout) 00:26:15.071 [2024-07-11 21:39:35.950585] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:26:15.071 [2024-07-11 21:39:35.950614] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:15.071 [2024-07-11 21:39:35.950620] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:15.071 [2024-07-11 21:39:35.950625] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc0ad70) 00:26:15.071 [2024-07-11 21:39:35.950637] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.071 [2024-07-11 21:39:35.950667] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc545f0, cid 0, qid 0 00:26:15.071 [2024-07-11 21:39:35.950737] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:15.071 [2024-07-11 21:39:35.950745] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:15.071 [2024-07-11 21:39:35.950748] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:15.071 [2024-07-11 21:39:35.950753] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc545f0) on tqpair=0xc0ad70 00:26:15.071 [2024-07-11 21:39:35.950759] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:26:15.071 [2024-07-11 21:39:35.950767] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:26:15.071 [2024-07-11 21:39:35.950775] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:15.071 [2024-07-11 21:39:35.950779] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:15.071 [2024-07-11 21:39:35.950783] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc0ad70) 00:26:15.071 [2024-07-11 21:39:35.950791] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.071 [2024-07-11 21:39:35.950810] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc545f0, cid 0, qid 0 00:26:15.071 [2024-07-11 21:39:35.950859] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:15.071 [2024-07-11 21:39:35.950867] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:15.071 [2024-07-11 21:39:35.950871] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:15.071 [2024-07-11 21:39:35.950876] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc545f0) on tqpair=0xc0ad70 00:26:15.071 [2024-07-11 21:39:35.950882] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:26:15.071 [2024-07-11 21:39:35.950892] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:26:15.071 [2024-07-11 21:39:35.950900] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:15.071 [2024-07-11 21:39:35.950904] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:15.071 [2024-07-11 21:39:35.950908] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc0ad70) 00:26:15.071 [2024-07-11 21:39:35.950916] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY 
GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.071 [2024-07-11 21:39:35.950934] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc545f0, cid 0, qid 0 00:26:15.071 [2024-07-11 21:39:35.950987] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:15.071 [2024-07-11 21:39:35.950994] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:15.071 [2024-07-11 21:39:35.950998] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:15.071 [2024-07-11 21:39:35.951002] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc545f0) on tqpair=0xc0ad70 00:26:15.071 [2024-07-11 21:39:35.951008] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:26:15.071 [2024-07-11 21:39:35.951019] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:15.071 [2024-07-11 21:39:35.951024] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:15.071 [2024-07-11 21:39:35.951028] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc0ad70) 00:26:15.071 [2024-07-11 21:39:35.951035] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.071 [2024-07-11 21:39:35.951053] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc545f0, cid 0, qid 0 00:26:15.071 [2024-07-11 21:39:35.951105] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:15.071 [2024-07-11 21:39:35.951112] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:15.071 [2024-07-11 21:39:35.951116] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:15.071 [2024-07-11 21:39:35.951120] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc545f0) on tqpair=0xc0ad70 00:26:15.071 [2024-07-11 21:39:35.951126] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:26:15.071 [2024-07-11 21:39:35.951131] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:26:15.071 [2024-07-11 21:39:35.951140] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:26:15.071 [2024-07-11 21:39:35.951246] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:26:15.071 [2024-07-11 21:39:35.951250] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:26:15.071 [2024-07-11 21:39:35.951260] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:15.071 [2024-07-11 21:39:35.951265] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:15.071 [2024-07-11 21:39:35.951269] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc0ad70) 00:26:15.071 [2024-07-11 21:39:35.951277] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.071 [2024-07-11 21:39:35.951295] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc545f0, cid 0, qid 0 00:26:15.071 [2024-07-11 21:39:35.951351] 
nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:15.071 [2024-07-11 21:39:35.951358] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:15.071 [2024-07-11 21:39:35.951362] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:15.071 [2024-07-11 21:39:35.951367] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc545f0) on tqpair=0xc0ad70 00:26:15.071 [2024-07-11 21:39:35.951372] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:26:15.071 [2024-07-11 21:39:35.951383] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:15.071 [2024-07-11 21:39:35.951387] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:15.071 [2024-07-11 21:39:35.951391] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc0ad70) 00:26:15.071 [2024-07-11 21:39:35.951399] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.071 [2024-07-11 21:39:35.951416] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc545f0, cid 0, qid 0 00:26:15.071 [2024-07-11 21:39:35.951477] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:15.072 [2024-07-11 21:39:35.951499] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:15.072 [2024-07-11 21:39:35.951505] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:15.072 [2024-07-11 21:39:35.951509] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc545f0) on tqpair=0xc0ad70 00:26:15.072 [2024-07-11 21:39:35.951515] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:26:15.072 [2024-07-11 21:39:35.951520] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:26:15.072 [2024-07-11 21:39:35.951530] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:26:15.072 [2024-07-11 21:39:35.951547] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:26:15.072 [2024-07-11 21:39:35.951557] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:15.072 [2024-07-11 21:39:35.951562] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:15.072 [2024-07-11 21:39:35.951566] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc0ad70) 00:26:15.072 [2024-07-11 21:39:35.951573] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.072 [2024-07-11 21:39:35.951594] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc545f0, cid 0, qid 0 00:26:15.072 [2024-07-11 21:39:35.951701] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:15.072 [2024-07-11 21:39:35.951709] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:15.072 [2024-07-11 21:39:35.951713] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:15.072 [2024-07-11 21:39:35.951718] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data 
info on tqpair(0xc0ad70): datao=0, datal=4096, cccid=0 00:26:15.072 [2024-07-11 21:39:35.951723] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc545f0) on tqpair(0xc0ad70): expected_datao=0, payload_size=4096 00:26:15.072 [2024-07-11 21:39:35.951733] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:15.072 [2024-07-11 21:39:35.951738] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:15.072 [2024-07-11 21:39:35.951746] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:15.072 [2024-07-11 21:39:35.951753] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:15.072 [2024-07-11 21:39:35.951757] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:15.072 [2024-07-11 21:39:35.951761] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc545f0) on tqpair=0xc0ad70 00:26:15.072 [2024-07-11 21:39:35.951770] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:26:15.072 [2024-07-11 21:39:35.951776] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:26:15.072 [2024-07-11 21:39:35.951780] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:26:15.072 [2024-07-11 21:39:35.951785] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:26:15.072 [2024-07-11 21:39:35.951790] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:26:15.072 [2024-07-11 21:39:35.951796] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:26:15.072 [2024-07-11 21:39:35.951810] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:26:15.072 [2024-07-11 21:39:35.951818] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:15.072 [2024-07-11 21:39:35.951823] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:15.072 [2024-07-11 21:39:35.951827] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc0ad70) 00:26:15.072 [2024-07-11 21:39:35.951835] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:26:15.072 [2024-07-11 21:39:35.951855] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc545f0, cid 0, qid 0 00:26:15.072 [2024-07-11 21:39:35.951906] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:15.072 [2024-07-11 21:39:35.951913] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:15.072 [2024-07-11 21:39:35.951917] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:15.072 [2024-07-11 21:39:35.951921] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc545f0) on tqpair=0xc0ad70 00:26:15.072 [2024-07-11 21:39:35.951929] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:15.072 [2024-07-11 21:39:35.951933] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:15.072 [2024-07-11 21:39:35.951937] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc0ad70) 00:26:15.072 [2024-07-11 21:39:35.951944] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:15.072 [2024-07-11 21:39:35.951951] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:15.072 [2024-07-11 21:39:35.951955] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:15.072 [2024-07-11 21:39:35.951959] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xc0ad70) 00:26:15.072 [2024-07-11 21:39:35.951965] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:15.072 [2024-07-11 21:39:35.951972] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:15.072 [2024-07-11 21:39:35.951976] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:15.072 [2024-07-11 21:39:35.951980] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xc0ad70) 00:26:15.072 [2024-07-11 21:39:35.951986] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:15.072 [2024-07-11 21:39:35.951992] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:15.072 [2024-07-11 21:39:35.951996] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:15.072 [2024-07-11 21:39:35.952000] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc0ad70) 00:26:15.072 [2024-07-11 21:39:35.952006] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:15.072 [2024-07-11 21:39:35.952011] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:26:15.072 [2024-07-11 21:39:35.952024] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:26:15.072 [2024-07-11 21:39:35.952032] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:15.072 [2024-07-11 21:39:35.952037] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:15.072 [2024-07-11 21:39:35.952041] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc0ad70) 00:26:15.072 [2024-07-11 21:39:35.952048] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.072 [2024-07-11 21:39:35.952068] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc545f0, cid 0, qid 0 00:26:15.072 [2024-07-11 21:39:35.952075] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc54750, cid 1, qid 0 00:26:15.072 [2024-07-11 21:39:35.952080] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc548b0, cid 2, qid 0 00:26:15.072 [2024-07-11 21:39:35.952085] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc54a10, cid 3, qid 0 00:26:15.072 [2024-07-11 21:39:35.952090] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc54b70, cid 4, qid 0 00:26:15.072 [2024-07-11 21:39:35.952185] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:15.072 [2024-07-11 21:39:35.952192] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:15.072 [2024-07-11 21:39:35.952196] 
nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:15.072 [2024-07-11 21:39:35.952200] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc54b70) on tqpair=0xc0ad70 00:26:15.072 [2024-07-11 21:39:35.952206] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:26:15.072 [2024-07-11 21:39:35.952211] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:26:15.072 [2024-07-11 21:39:35.952220] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:26:15.072 [2024-07-11 21:39:35.952231] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:26:15.072 [2024-07-11 21:39:35.952238] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:15.072 [2024-07-11 21:39:35.952243] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:15.072 [2024-07-11 21:39:35.952247] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc0ad70) 00:26:15.072 [2024-07-11 21:39:35.952254] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:26:15.072 [2024-07-11 21:39:35.952272] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc54b70, cid 4, qid 0 00:26:15.072 [2024-07-11 21:39:35.952334] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:15.072 [2024-07-11 21:39:35.952341] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:15.072 [2024-07-11 21:39:35.952345] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:15.072 [2024-07-11 21:39:35.952349] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc54b70) on tqpair=0xc0ad70 00:26:15.072 [2024-07-11 21:39:35.952412] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:26:15.072 [2024-07-11 21:39:35.952423] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:26:15.072 [2024-07-11 21:39:35.952431] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:15.072 [2024-07-11 21:39:35.952435] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:15.072 [2024-07-11 21:39:35.952439] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc0ad70) 00:26:15.072 [2024-07-11 21:39:35.952447] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.072 [2024-07-11 21:39:35.952466] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc54b70, cid 4, qid 0 00:26:15.072 [2024-07-11 21:39:35.952549] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:15.072 [2024-07-11 21:39:35.952558] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:15.072 [2024-07-11 21:39:35.952562] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:15.072 [2024-07-11 21:39:35.952566] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on 
tqpair(0xc0ad70): datao=0, datal=4096, cccid=4 00:26:15.072 [2024-07-11 21:39:35.952571] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc54b70) on tqpair(0xc0ad70): expected_datao=0, payload_size=4096 00:26:15.072 [2024-07-11 21:39:35.952579] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:15.072 [2024-07-11 21:39:35.952583] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:15.072 [2024-07-11 21:39:35.952592] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:15.072 [2024-07-11 21:39:35.952598] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:15.072 [2024-07-11 21:39:35.952602] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:15.072 [2024-07-11 21:39:35.952606] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc54b70) on tqpair=0xc0ad70 00:26:15.072 [2024-07-11 21:39:35.952621] nvme_ctrlr.c:4556:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:26:15.072 [2024-07-11 21:39:35.952633] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:26:15.072 [2024-07-11 21:39:35.952644] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:26:15.072 [2024-07-11 21:39:35.952653] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:15.072 [2024-07-11 21:39:35.952657] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:15.072 [2024-07-11 21:39:35.952661] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc0ad70) 00:26:15.073 [2024-07-11 21:39:35.952668] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.073 [2024-07-11 21:39:35.952689] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc54b70, cid 4, qid 0 00:26:15.073 [2024-07-11 21:39:35.952764] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:15.073 [2024-07-11 21:39:35.952777] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:15.073 [2024-07-11 21:39:35.952782] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:15.073 [2024-07-11 21:39:35.952786] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc0ad70): datao=0, datal=4096, cccid=4 00:26:15.073 [2024-07-11 21:39:35.952791] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc54b70) on tqpair(0xc0ad70): expected_datao=0, payload_size=4096 00:26:15.073 [2024-07-11 21:39:35.952799] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:15.073 [2024-07-11 21:39:35.952803] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:15.073 [2024-07-11 21:39:35.952812] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:15.073 [2024-07-11 21:39:35.952818] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:15.073 [2024-07-11 21:39:35.952822] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:15.073 [2024-07-11 21:39:35.952826] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc54b70) on tqpair=0xc0ad70 00:26:15.073 [2024-07-11 21:39:35.952843] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id 
descriptors (timeout 30000 ms) 00:26:15.073 [2024-07-11 21:39:35.952855] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:26:15.073 [2024-07-11 21:39:35.952863] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:15.073 [2024-07-11 21:39:35.952868] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:15.073 [2024-07-11 21:39:35.952872] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc0ad70) 00:26:15.073 [2024-07-11 21:39:35.952879] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.073 [2024-07-11 21:39:35.952899] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc54b70, cid 4, qid 0 00:26:15.073 [2024-07-11 21:39:35.952959] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:15.073 [2024-07-11 21:39:35.952966] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:15.073 [2024-07-11 21:39:35.952970] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:15.073 [2024-07-11 21:39:35.952974] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc0ad70): datao=0, datal=4096, cccid=4 00:26:15.073 [2024-07-11 21:39:35.952978] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc54b70) on tqpair(0xc0ad70): expected_datao=0, payload_size=4096 00:26:15.073 [2024-07-11 21:39:35.952986] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:15.073 [2024-07-11 21:39:35.952990] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:15.073 [2024-07-11 21:39:35.952999] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:15.073 [2024-07-11 21:39:35.953005] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:15.073 [2024-07-11 21:39:35.953009] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:15.073 [2024-07-11 21:39:35.953013] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc54b70) on tqpair=0xc0ad70 00:26:15.073 [2024-07-11 21:39:35.953022] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:26:15.073 [2024-07-11 21:39:35.953031] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:26:15.073 [2024-07-11 21:39:35.953042] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:26:15.073 [2024-07-11 21:39:35.953050] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:26:15.073 [2024-07-11 21:39:35.953056] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:26:15.073 [2024-07-11 21:39:35.953061] nvme_ctrlr.c:2978:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:26:15.073 [2024-07-11 21:39:35.953066] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:26:15.073 [2024-07-11 21:39:35.953072] 
nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:26:15.073 [2024-07-11 21:39:35.953091] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:15.073 [2024-07-11 21:39:35.953096] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:15.073 [2024-07-11 21:39:35.953100] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc0ad70) 00:26:15.073 [2024-07-11 21:39:35.953108] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.073 [2024-07-11 21:39:35.953116] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:15.073 [2024-07-11 21:39:35.953120] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:15.073 [2024-07-11 21:39:35.953124] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xc0ad70) 00:26:15.073 [2024-07-11 21:39:35.953131] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:26:15.073 [2024-07-11 21:39:35.953160] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc54b70, cid 4, qid 0 00:26:15.073 [2024-07-11 21:39:35.953168] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc54cd0, cid 5, qid 0 00:26:15.073 [2024-07-11 21:39:35.953236] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:15.073 [2024-07-11 21:39:35.953243] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:15.073 [2024-07-11 21:39:35.953247] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:15.073 [2024-07-11 21:39:35.953251] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc54b70) on tqpair=0xc0ad70 00:26:15.073 [2024-07-11 21:39:35.953258] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:15.073 [2024-07-11 21:39:35.953264] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:15.073 [2024-07-11 21:39:35.953268] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:15.073 [2024-07-11 21:39:35.953272] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc54cd0) on tqpair=0xc0ad70 00:26:15.073 [2024-07-11 21:39:35.953283] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:15.073 [2024-07-11 21:39:35.953287] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:15.073 [2024-07-11 21:39:35.953291] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xc0ad70) 00:26:15.073 [2024-07-11 21:39:35.953298] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.073 [2024-07-11 21:39:35.953317] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc54cd0, cid 5, qid 0 00:26:15.073 [2024-07-11 21:39:35.953369] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:15.073 [2024-07-11 21:39:35.953376] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:15.073 [2024-07-11 21:39:35.953380] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:15.073 [2024-07-11 21:39:35.953384] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc54cd0) on tqpair=0xc0ad70 00:26:15.073 [2024-07-11 21:39:35.953395] nvme_tcp.c: 
739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:15.073 [2024-07-11 21:39:35.953400] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:15.073 [2024-07-11 21:39:35.953404] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xc0ad70) 00:26:15.073 [2024-07-11 21:39:35.953411] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.073 [2024-07-11 21:39:35.953429] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc54cd0, cid 5, qid 0 00:26:15.073 [2024-07-11 21:39:35.957502] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:15.073 [2024-07-11 21:39:35.957521] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:15.073 [2024-07-11 21:39:35.957526] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:15.073 [2024-07-11 21:39:35.957531] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc54cd0) on tqpair=0xc0ad70 00:26:15.073 [2024-07-11 21:39:35.957547] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:15.073 [2024-07-11 21:39:35.957553] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:15.073 [2024-07-11 21:39:35.957557] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xc0ad70) 00:26:15.073 [2024-07-11 21:39:35.957566] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.073 [2024-07-11 21:39:35.957595] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc54cd0, cid 5, qid 0 00:26:15.073 [2024-07-11 21:39:35.957658] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:15.073 [2024-07-11 21:39:35.957666] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:15.073 [2024-07-11 21:39:35.957669] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:15.073 [2024-07-11 21:39:35.957673] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc54cd0) on tqpair=0xc0ad70 00:26:15.073 [2024-07-11 21:39:35.957689] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:15.073 [2024-07-11 21:39:35.957695] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:15.073 [2024-07-11 21:39:35.957699] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xc0ad70) 00:26:15.073 [2024-07-11 21:39:35.957706] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.073 [2024-07-11 21:39:35.957714] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:15.073 [2024-07-11 21:39:35.957718] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:15.073 [2024-07-11 21:39:35.957722] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc0ad70) 00:26:15.073 [2024-07-11 21:39:35.957728] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.073 [2024-07-11 21:39:35.957737] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:15.073 [2024-07-11 21:39:35.957741] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
enter 00:26:15.073 [2024-07-11 21:39:35.957745] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0xc0ad70) 00:26:15.073 [2024-07-11 21:39:35.957751] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.073 [2024-07-11 21:39:35.957760] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:15.073 [2024-07-11 21:39:35.957764] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:15.073 [2024-07-11 21:39:35.957768] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xc0ad70) 00:26:15.073 [2024-07-11 21:39:35.957774] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.073 [2024-07-11 21:39:35.957796] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc54cd0, cid 5, qid 0 00:26:15.073 [2024-07-11 21:39:35.957804] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc54b70, cid 4, qid 0 00:26:15.073 [2024-07-11 21:39:35.957809] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc54e30, cid 6, qid 0 00:26:15.073 [2024-07-11 21:39:35.957814] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc54f90, cid 7, qid 0 00:26:15.073 [2024-07-11 21:39:35.957987] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:15.073 [2024-07-11 21:39:35.957994] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:15.073 [2024-07-11 21:39:35.957998] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:15.073 [2024-07-11 21:39:35.958002] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc0ad70): datao=0, datal=8192, cccid=5 00:26:15.074 [2024-07-11 21:39:35.958007] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc54cd0) on tqpair(0xc0ad70): expected_datao=0, payload_size=8192 00:26:15.074 [2024-07-11 21:39:35.958025] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:15.074 [2024-07-11 21:39:35.958031] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:15.074 [2024-07-11 21:39:35.958037] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:15.074 [2024-07-11 21:39:35.958043] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:15.074 [2024-07-11 21:39:35.958046] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:15.074 [2024-07-11 21:39:35.958050] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc0ad70): datao=0, datal=512, cccid=4 00:26:15.074 [2024-07-11 21:39:35.958055] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc54b70) on tqpair(0xc0ad70): expected_datao=0, payload_size=512 00:26:15.074 [2024-07-11 21:39:35.958062] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:15.074 [2024-07-11 21:39:35.958066] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:15.074 [2024-07-11 21:39:35.958072] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:15.074 [2024-07-11 21:39:35.958078] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:15.074 [2024-07-11 21:39:35.958082] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:15.074 [2024-07-11 21:39:35.958086] 
nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc0ad70): datao=0, datal=512, cccid=6 00:26:15.074 [2024-07-11 21:39:35.958090] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc54e30) on tqpair(0xc0ad70): expected_datao=0, payload_size=512 00:26:15.074 [2024-07-11 21:39:35.958098] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:15.074 [2024-07-11 21:39:35.958101] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:15.074 [2024-07-11 21:39:35.958107] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:15.074 [2024-07-11 21:39:35.958113] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:15.074 [2024-07-11 21:39:35.958117] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:15.074 [2024-07-11 21:39:35.958120] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc0ad70): datao=0, datal=4096, cccid=7 00:26:15.074 [2024-07-11 21:39:35.958125] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc54f90) on tqpair(0xc0ad70): expected_datao=0, payload_size=4096 00:26:15.074 [2024-07-11 21:39:35.958133] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:15.074 [2024-07-11 21:39:35.958137] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:15.074 [2024-07-11 21:39:35.958145] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:15.074 [2024-07-11 21:39:35.958152] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:15.074 [2024-07-11 21:39:35.958155] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:15.074 [2024-07-11 21:39:35.958159] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc54cd0) on tqpair=0xc0ad70 00:26:15.074 [2024-07-11 21:39:35.958180] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:15.074 [2024-07-11 21:39:35.958187] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:15.074 [2024-07-11 21:39:35.958190] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:15.074 [2024-07-11 21:39:35.958194] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc54b70) on tqpair=0xc0ad70 00:26:15.074 [2024-07-11 21:39:35.958205] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:15.074 [2024-07-11 21:39:35.958211] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:15.074 [2024-07-11 21:39:35.958215] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:15.074 [2024-07-11 21:39:35.958219] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc54e30) on tqpair=0xc0ad70 00:26:15.074 [2024-07-11 21:39:35.958226] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:15.074 [2024-07-11 21:39:35.958232] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:15.074 [2024-07-11 21:39:35.958236] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:15.074 [2024-07-11 21:39:35.958240] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc54f90) on tqpair=0xc0ad70 00:26:15.074 ===================================================== 00:26:15.074 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:15.074 ===================================================== 00:26:15.074 Controller Capabilities/Features 00:26:15.074 ================================ 00:26:15.074 Vendor ID: 8086 
00:26:15.074 Subsystem Vendor ID: 8086
00:26:15.074 Serial Number: SPDK00000000000001
00:26:15.074 Model Number: SPDK bdev Controller
00:26:15.074 Firmware Version: 24.01.1
00:26:15.074 Recommended Arb Burst: 6
00:26:15.074 IEEE OUI Identifier: e4 d2 5c
00:26:15.074 Multi-path I/O
00:26:15.074 May have multiple subsystem ports: Yes
00:26:15.074 May have multiple controllers: Yes
00:26:15.074 Associated with SR-IOV VF: No
00:26:15.074 Max Data Transfer Size: 131072
00:26:15.074 Max Number of Namespaces: 32
00:26:15.074 Max Number of I/O Queues: 127
00:26:15.074 NVMe Specification Version (VS): 1.3
00:26:15.074 NVMe Specification Version (Identify): 1.3
00:26:15.074 Maximum Queue Entries: 128
00:26:15.074 Contiguous Queues Required: Yes
00:26:15.074 Arbitration Mechanisms Supported
00:26:15.074 Weighted Round Robin: Not Supported
00:26:15.074 Vendor Specific: Not Supported
00:26:15.074 Reset Timeout: 15000 ms
00:26:15.074 Doorbell Stride: 4 bytes
00:26:15.074 NVM Subsystem Reset: Not Supported
00:26:15.074 Command Sets Supported
00:26:15.074 NVM Command Set: Supported
00:26:15.074 Boot Partition: Not Supported
00:26:15.074 Memory Page Size Minimum: 4096 bytes
00:26:15.074 Memory Page Size Maximum: 4096 bytes
00:26:15.074 Persistent Memory Region: Not Supported
00:26:15.074 Optional Asynchronous Events Supported
00:26:15.074 Namespace Attribute Notices: Supported
00:26:15.074 Firmware Activation Notices: Not Supported
00:26:15.074 ANA Change Notices: Not Supported
00:26:15.074 PLE Aggregate Log Change Notices: Not Supported
00:26:15.074 LBA Status Info Alert Notices: Not Supported
00:26:15.074 EGE Aggregate Log Change Notices: Not Supported
00:26:15.074 Normal NVM Subsystem Shutdown event: Not Supported
00:26:15.074 Zone Descriptor Change Notices: Not Supported
00:26:15.074 Discovery Log Change Notices: Not Supported
00:26:15.074 Controller Attributes
00:26:15.074 128-bit Host Identifier: Supported
00:26:15.074 Non-Operational Permissive Mode: Not Supported
00:26:15.074 NVM Sets: Not Supported
00:26:15.074 Read Recovery Levels: Not Supported
00:26:15.074 Endurance Groups: Not Supported
00:26:15.074 Predictable Latency Mode: Not Supported
00:26:15.074 Traffic Based Keep ALive: Not Supported
00:26:15.074 Namespace Granularity: Not Supported
00:26:15.074 SQ Associations: Not Supported
00:26:15.074 UUID List: Not Supported
00:26:15.074 Multi-Domain Subsystem: Not Supported
00:26:15.074 Fixed Capacity Management: Not Supported
00:26:15.074 Variable Capacity Management: Not Supported
00:26:15.074 Delete Endurance Group: Not Supported
00:26:15.074 Delete NVM Set: Not Supported
00:26:15.074 Extended LBA Formats Supported: Not Supported
00:26:15.074 Flexible Data Placement Supported: Not Supported
00:26:15.074
00:26:15.074 Controller Memory Buffer Support
00:26:15.074 ================================
00:26:15.074 Supported: No
00:26:15.074
00:26:15.074 Persistent Memory Region Support
00:26:15.074 ================================
00:26:15.074 Supported: No
00:26:15.074
00:26:15.074 Admin Command Set Attributes
00:26:15.074 ============================
00:26:15.074 Security Send/Receive: Not Supported
00:26:15.074 Format NVM: Not Supported
00:26:15.074 Firmware Activate/Download: Not Supported
00:26:15.074 Namespace Management: Not Supported
00:26:15.074 Device Self-Test: Not Supported
00:26:15.074 Directives: Not Supported
00:26:15.074 NVMe-MI: Not Supported
00:26:15.074 Virtualization Management: Not Supported
00:26:15.074 Doorbell Buffer Config: Not Supported
00:26:15.074 Get LBA Status Capability: Not Supported
00:26:15.074 Command & Feature Lockdown Capability: Not Supported
00:26:15.074 Abort Command Limit: 4
00:26:15.074 Async Event Request Limit: 4
00:26:15.074 Number of Firmware Slots: N/A
00:26:15.074 Firmware Slot 1 Read-Only: N/A
00:26:15.074 Firmware Activation Without Reset: N/A
00:26:15.074 Multiple Update Detection Support: N/A
00:26:15.074 Firmware Update Granularity: No Information Provided
00:26:15.074 Per-Namespace SMART Log: No
00:26:15.074 Asymmetric Namespace Access Log Page: Not Supported
00:26:15.074 Subsystem NQN: nqn.2016-06.io.spdk:cnode1
00:26:15.074 Command Effects Log Page: Supported
00:26:15.074 Get Log Page Extended Data: Supported
00:26:15.074 Telemetry Log Pages: Not Supported
00:26:15.074 Persistent Event Log Pages: Not Supported
00:26:15.074 Supported Log Pages Log Page: May Support
00:26:15.074 Commands Supported & Effects Log Page: Not Supported
00:26:15.074 Feature Identifiers & Effects Log Page:May Support
00:26:15.074 NVMe-MI Commands & Effects Log Page: May Support
00:26:15.074 Data Area 4 for Telemetry Log: Not Supported
00:26:15.074 Error Log Page Entries Supported: 128
00:26:15.074 Keep Alive: Supported
00:26:15.074 Keep Alive Granularity: 10000 ms
00:26:15.074
00:26:15.074 NVM Command Set Attributes
00:26:15.074 ==========================
00:26:15.074 Submission Queue Entry Size
00:26:15.074 Max: 64
00:26:15.074 Min: 64
00:26:15.074 Completion Queue Entry Size
00:26:15.074 Max: 16
00:26:15.074 Min: 16
00:26:15.074 Number of Namespaces: 32
00:26:15.074 Compare Command: Supported
00:26:15.074 Write Uncorrectable Command: Not Supported
00:26:15.074 Dataset Management Command: Supported
00:26:15.074 Write Zeroes Command: Supported
00:26:15.074 Set Features Save Field: Not Supported
00:26:15.074 Reservations: Supported
00:26:15.074 Timestamp: Not Supported
00:26:15.074 Copy: Supported
00:26:15.074 Volatile Write Cache: Present
00:26:15.074 Atomic Write Unit (Normal): 1
00:26:15.074 Atomic Write Unit (PFail): 1
00:26:15.074 Atomic Compare & Write Unit: 1
00:26:15.074 Fused Compare & Write: Supported
00:26:15.074 Scatter-Gather List
00:26:15.074 SGL Command Set: Supported
00:26:15.074 SGL Keyed: Supported
00:26:15.074 SGL Bit Bucket Descriptor: Not Supported
00:26:15.074 SGL Metadata Pointer: Not Supported
00:26:15.074 Oversized SGL: Not Supported
00:26:15.075 SGL Metadata Address: Not Supported
00:26:15.075 SGL Offset: Supported
00:26:15.075 Transport SGL Data Block: Not Supported
00:26:15.075 Replay Protected Memory Block: Not Supported
00:26:15.075
00:26:15.075 Firmware Slot Information
00:26:15.075 =========================
00:26:15.075 Active slot: 1
00:26:15.075 Slot 1 Firmware Revision: 24.01.1
00:26:15.075
00:26:15.075
00:26:15.075 Commands Supported and Effects
00:26:15.075 ==============================
00:26:15.075 Admin Commands
00:26:15.075 --------------
00:26:15.075 Get Log Page (02h): Supported
00:26:15.075 Identify (06h): Supported
00:26:15.075 Abort (08h): Supported
00:26:15.075 Set Features (09h): Supported
00:26:15.075 Get Features (0Ah): Supported
00:26:15.075 Asynchronous Event Request (0Ch): Supported
00:26:15.075 Keep Alive (18h): Supported
00:26:15.075 I/O Commands
00:26:15.075 ------------
00:26:15.075 Flush (00h): Supported LBA-Change
00:26:15.075 Write (01h): Supported LBA-Change
00:26:15.075 Read (02h): Supported
00:26:15.075 Compare (05h): Supported
00:26:15.075 Write Zeroes (08h): Supported LBA-Change
00:26:15.075 Dataset Management (09h): Supported LBA-Change
00:26:15.075 Copy (19h): Supported LBA-Change
00:26:15.075 Unknown (79h): Supported LBA-Change 00:26:15.075 Unknown (7Ah): Supported 00:26:15.075 00:26:15.075 Error Log 00:26:15.075 ========= 00:26:15.075 00:26:15.075 Arbitration 00:26:15.075 =========== 00:26:15.075 Arbitration Burst: 1 00:26:15.075 00:26:15.075 Power Management 00:26:15.075 ================ 00:26:15.075 Number of Power States: 1 00:26:15.075 Current Power State: Power State #0 00:26:15.075 Power State #0: 00:26:15.075 Max Power: 0.00 W 00:26:15.075 Non-Operational State: Operational 00:26:15.075 Entry Latency: Not Reported 00:26:15.075 Exit Latency: Not Reported 00:26:15.075 Relative Read Throughput: 0 00:26:15.075 Relative Read Latency: 0 00:26:15.075 Relative Write Throughput: 0 00:26:15.075 Relative Write Latency: 0 00:26:15.075 Idle Power: Not Reported 00:26:15.075 Active Power: Not Reported 00:26:15.075 Non-Operational Permissive Mode: Not Supported 00:26:15.075 00:26:15.075 Health Information 00:26:15.075 ================== 00:26:15.075 Critical Warnings: 00:26:15.075 Available Spare Space: OK 00:26:15.075 Temperature: OK 00:26:15.075 Device Reliability: OK 00:26:15.075 Read Only: No 00:26:15.075 Volatile Memory Backup: OK 00:26:15.075 Current Temperature: 0 Kelvin (-273 Celsius) 00:26:15.075 Temperature Threshold: [2024-07-11 21:39:35.958361] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:15.075 [2024-07-11 21:39:35.958368] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:15.075 [2024-07-11 21:39:35.958373] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xc0ad70) 00:26:15.075 [2024-07-11 21:39:35.958393] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.075 [2024-07-11 21:39:35.958419] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc54f90, cid 7, qid 0 00:26:15.075 [2024-07-11 21:39:35.958473] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:15.075 [2024-07-11 21:39:35.958481] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:15.075 [2024-07-11 21:39:35.958501] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:15.075 [2024-07-11 21:39:35.958505] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc54f90) on tqpair=0xc0ad70 00:26:15.075 [2024-07-11 21:39:35.958547] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:26:15.075 [2024-07-11 21:39:35.958563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.075 [2024-07-11 21:39:35.958571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.075 [2024-07-11 21:39:35.958577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.075 [2024-07-11 21:39:35.958584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.075 [2024-07-11 21:39:35.958594] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:15.075 [2024-07-11 21:39:35.958598] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:15.075 [2024-07-11 21:39:35.958602] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on 
tqpair(0xc0ad70) 00:26:15.075 [2024-07-11 21:39:35.958610] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.075 [2024-07-11 21:39:35.958634] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc54a10, cid 3, qid 0 00:26:15.075 [2024-07-11 21:39:35.958690] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:15.075 [2024-07-11 21:39:35.958697] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:15.075 [2024-07-11 21:39:35.958701] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:15.075 [2024-07-11 21:39:35.958705] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc54a10) on tqpair=0xc0ad70 00:26:15.075 [2024-07-11 21:39:35.958714] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:15.075 [2024-07-11 21:39:35.958718] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:15.075 [2024-07-11 21:39:35.958722] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc0ad70) 00:26:15.075 [2024-07-11 21:39:35.958729] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.075 [2024-07-11 21:39:35.958750] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc54a10, cid 3, qid 0 00:26:15.075 [2024-07-11 21:39:35.958831] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:15.075 [2024-07-11 21:39:35.958838] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:15.075 [2024-07-11 21:39:35.958842] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:15.075 [2024-07-11 21:39:35.958846] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc54a10) on tqpair=0xc0ad70 00:26:15.075 [2024-07-11 21:39:35.958851] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:26:15.075 [2024-07-11 21:39:35.958856] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:26:15.075 [2024-07-11 21:39:35.958867] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:15.075 [2024-07-11 21:39:35.958872] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:15.075 [2024-07-11 21:39:35.958875] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc0ad70) 00:26:15.075 [2024-07-11 21:39:35.958883] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.075 [2024-07-11 21:39:35.958900] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc54a10, cid 3, qid 0 00:26:15.075 [2024-07-11 21:39:35.958956] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:15.075 [2024-07-11 21:39:35.958963] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:15.075 [2024-07-11 21:39:35.958967] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:15.075 [2024-07-11 21:39:35.958971] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc54a10) on tqpair=0xc0ad70 00:26:15.075 [2024-07-11 21:39:35.958982] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:15.075 [2024-07-11 21:39:35.958987] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:15.075 [2024-07-11 
21:39:35.958991] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc0ad70) 00:26:15.075 [2024-07-11 21:39:35.958998] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.075 [2024-07-11 21:39:35.959016] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc54a10, cid 3, qid 0 00:26:15.075 [2024-07-11 21:39:35.959071] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:15.075 [2024-07-11 21:39:35.959078] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:15.075 [2024-07-11 21:39:35.959082] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:15.075 [2024-07-11 21:39:35.959086] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc54a10) on tqpair=0xc0ad70 00:26:15.075 [2024-07-11 21:39:35.959097] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:15.075 [2024-07-11 21:39:35.959101] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:15.075 [2024-07-11 21:39:35.959105] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc0ad70) 00:26:15.075 [2024-07-11 21:39:35.959112] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.075 [2024-07-11 21:39:35.959130] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc54a10, cid 3, qid 0 00:26:15.075 [2024-07-11 21:39:35.959178] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:15.075 [2024-07-11 21:39:35.959185] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:15.075 [2024-07-11 21:39:35.959189] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:15.075 [2024-07-11 21:39:35.959193] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc54a10) on tqpair=0xc0ad70 00:26:15.075 [2024-07-11 21:39:35.959204] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:15.075 [2024-07-11 21:39:35.959208] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:15.076 [2024-07-11 21:39:35.959212] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc0ad70) 00:26:15.076 [2024-07-11 21:39:35.959219] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.076 [2024-07-11 21:39:35.959237] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc54a10, cid 3, qid 0 00:26:15.076 [2024-07-11 21:39:35.959299] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:15.076 [2024-07-11 21:39:35.959306] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:15.076 [2024-07-11 21:39:35.959310] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:15.076 [2024-07-11 21:39:35.959314] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc54a10) on tqpair=0xc0ad70 00:26:15.076 [2024-07-11 21:39:35.959325] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:15.076 [2024-07-11 21:39:35.959330] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:15.076 [2024-07-11 21:39:35.959333] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc0ad70) 00:26:15.076 [2024-07-11 21:39:35.959341] nvme_qpair.c: 218:nvme_admin_qpair_print_command: 
*NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.076 [2024-07-11 21:39:35.959359] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc54a10, cid 3, qid 0 00:26:15.076 [2024-07-11 21:39:35.959416] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:15.076 [2024-07-11 21:39:35.959424] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:15.076 [2024-07-11 21:39:35.959427] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:15.076 [2024-07-11 21:39:35.959431] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc54a10) on tqpair=0xc0ad70 00:26:15.076 [2024-07-11 21:39:35.959442] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:15.076 [2024-07-11 21:39:35.959447] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:15.076 [2024-07-11 21:39:35.959451] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc0ad70) 00:26:15.076 [2024-07-11 21:39:35.959458] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.076 [2024-07-11 21:39:35.959475] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc54a10, cid 3, qid 0 00:26:15.076 [2024-07-11 21:39:35.959550] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:15.076 [2024-07-11 21:39:35.959559] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:15.076 [2024-07-11 21:39:35.959563] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:15.076 [2024-07-11 21:39:35.959567] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc54a10) on tqpair=0xc0ad70 00:26:15.076 [2024-07-11 21:39:35.959578] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:15.076 [2024-07-11 21:39:35.959583] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:15.076 [2024-07-11 21:39:35.959586] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc0ad70) 00:26:15.076 [2024-07-11 21:39:35.959594] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.076 [2024-07-11 21:39:35.959613] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc54a10, cid 3, qid 0 00:26:15.076 [2024-07-11 21:39:35.959665] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:15.076 [2024-07-11 21:39:35.959672] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:15.076 [2024-07-11 21:39:35.959675] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:15.076 [2024-07-11 21:39:35.959679] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc54a10) on tqpair=0xc0ad70 00:26:15.076 [2024-07-11 21:39:35.959690] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:15.076 [2024-07-11 21:39:35.959695] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:15.076 [2024-07-11 21:39:35.959699] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc0ad70) 00:26:15.076 [2024-07-11 21:39:35.959706] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.076 [2024-07-11 21:39:35.959723] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc54a10, cid 3, 
qid 0 00:26:15.076 [2024-07-11 21:39:35.959780] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:15.076 [2024-07-11 21:39:35.959787] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:15.076 [2024-07-11 21:39:35.959791] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:15.076 [2024-07-11 21:39:35.959795] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc54a10) on tqpair=0xc0ad70 00:26:15.076 [2024-07-11 21:39:35.959806] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:15.076 [2024-07-11 21:39:35.959810] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:15.076 [2024-07-11 21:39:35.959814] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc0ad70) 00:26:15.076 [2024-07-11 21:39:35.959821] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.076 [2024-07-11 21:39:35.959838] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc54a10, cid 3, qid 0 00:26:15.076 [2024-07-11 21:39:35.959897] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:15.076 [2024-07-11 21:39:35.959904] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:15.076 [2024-07-11 21:39:35.959908] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:15.076 [2024-07-11 21:39:35.959915] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc54a10) on tqpair=0xc0ad70 00:26:15.076 [2024-07-11 21:39:35.959925] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:15.076 [2024-07-11 21:39:35.959930] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:15.076 [2024-07-11 21:39:35.959934] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc0ad70) 00:26:15.076 [2024-07-11 21:39:35.959941] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.076 [2024-07-11 21:39:35.959958] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc54a10, cid 3, qid 0 00:26:15.076 [2024-07-11 21:39:35.960010] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:15.076 [2024-07-11 21:39:35.960019] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:15.076 [2024-07-11 21:39:35.960025] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:15.076 [2024-07-11 21:39:35.960032] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc54a10) on tqpair=0xc0ad70 00:26:15.076 [2024-07-11 21:39:35.960049] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:15.076 [2024-07-11 21:39:35.960057] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:15.076 [2024-07-11 21:39:35.960063] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc0ad70) 00:26:15.076 [2024-07-11 21:39:35.960071] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.076 [2024-07-11 21:39:35.960092] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc54a10, cid 3, qid 0 00:26:15.076 [2024-07-11 21:39:35.960141] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:15.076 [2024-07-11 21:39:35.960149] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: 
enter: pdu type =5 00:26:15.076 [2024-07-11 21:39:35.960153] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:15.076 [2024-07-11 21:39:35.960157] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc54a10) on tqpair=0xc0ad70 00:26:15.076 [2024-07-11 21:39:35.960167] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:15.076 [2024-07-11 21:39:35.960172] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:15.076 [2024-07-11 21:39:35.960176] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc0ad70) 00:26:15.076 [2024-07-11 21:39:35.960183] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.076 [2024-07-11 21:39:35.960200] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc54a10, cid 3, qid 0 00:26:15.076 [2024-07-11 21:39:35.960252] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:15.076 [2024-07-11 21:39:35.960258] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:15.076 [2024-07-11 21:39:35.960262] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:15.076 [2024-07-11 21:39:35.960266] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc54a10) on tqpair=0xc0ad70 00:26:15.076 [2024-07-11 21:39:35.960277] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:15.076 [2024-07-11 21:39:35.960281] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:15.076 [2024-07-11 21:39:35.960285] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc0ad70) 00:26:15.076 [2024-07-11 21:39:35.960292] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.076 [2024-07-11 21:39:35.960310] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc54a10, cid 3, qid 0 00:26:15.076 [2024-07-11 21:39:35.960364] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:15.076 [2024-07-11 21:39:35.960371] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:15.076 [2024-07-11 21:39:35.960375] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:15.076 [2024-07-11 21:39:35.960379] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc54a10) on tqpair=0xc0ad70 00:26:15.076 [2024-07-11 21:39:35.960389] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:15.076 [2024-07-11 21:39:35.960394] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:15.076 [2024-07-11 21:39:35.960398] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc0ad70) 00:26:15.076 [2024-07-11 21:39:35.960405] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.076 [2024-07-11 21:39:35.960422] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc54a10, cid 3, qid 0 00:26:15.076 [2024-07-11 21:39:35.960474] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:15.076 [2024-07-11 21:39:35.960510] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:15.076 [2024-07-11 21:39:35.960518] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:15.076 [2024-07-11 21:39:35.960523] nvme_tcp.c: 
857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc54a10) on tqpair=0xc0ad70 00:26:15.076 [2024-07-11 21:39:35.960535] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:15.076 [2024-07-11 21:39:35.960540] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:15.076 [2024-07-11 21:39:35.960544] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc0ad70) 00:26:15.076 [2024-07-11 21:39:35.960551] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.076 [2024-07-11 21:39:35.960575] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc54a10, cid 3, qid 0 00:26:15.076 [2024-07-11 21:39:35.960625] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:15.076 [2024-07-11 21:39:35.960632] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:15.076 [2024-07-11 21:39:35.960636] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:15.076 [2024-07-11 21:39:35.960640] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc54a10) on tqpair=0xc0ad70 00:26:15.076 [2024-07-11 21:39:35.960651] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:15.076 [2024-07-11 21:39:35.960656] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:15.076 [2024-07-11 21:39:35.960660] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc0ad70) 00:26:15.076 [2024-07-11 21:39:35.960667] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.076 [2024-07-11 21:39:35.960685] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc54a10, cid 3, qid 0 00:26:15.076 [2024-07-11 21:39:35.960739] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:15.076 [2024-07-11 21:39:35.960746] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:15.076 [2024-07-11 21:39:35.960749] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:15.076 [2024-07-11 21:39:35.960754] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc54a10) on tqpair=0xc0ad70 00:26:15.077 [2024-07-11 21:39:35.960764] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:15.077 [2024-07-11 21:39:35.960769] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:15.077 [2024-07-11 21:39:35.960773] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc0ad70) 00:26:15.077 [2024-07-11 21:39:35.960780] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.077 [2024-07-11 21:39:35.960797] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc54a10, cid 3, qid 0 00:26:15.077 [2024-07-11 21:39:35.960848] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:15.077 [2024-07-11 21:39:35.960855] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:15.077 [2024-07-11 21:39:35.960859] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:15.077 [2024-07-11 21:39:35.960863] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc54a10) on tqpair=0xc0ad70 00:26:15.077 [2024-07-11 21:39:35.960873] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:15.077 
[2024-07-11 21:39:35.960878] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:15.077 [2024-07-11 21:39:35.960882] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc0ad70) 00:26:15.077 [2024-07-11 21:39:35.960889] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.077 [2024-07-11 21:39:35.960906] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc54a10, cid 3, qid 0 00:26:15.077 [2024-07-11 21:39:35.960957] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:15.077 [2024-07-11 21:39:35.960964] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:15.077 [2024-07-11 21:39:35.960968] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:15.077 [2024-07-11 21:39:35.960972] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc54a10) on tqpair=0xc0ad70 00:26:15.077 [2024-07-11 21:39:35.960982] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:15.077 [2024-07-11 21:39:35.960987] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:15.077 [2024-07-11 21:39:35.960991] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc0ad70) 00:26:15.077 [2024-07-11 21:39:35.960998] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.077 [2024-07-11 21:39:35.961015] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc54a10, cid 3, qid 0 00:26:15.077 [2024-07-11 21:39:35.961072] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:15.077 [2024-07-11 21:39:35.961079] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:15.077 [2024-07-11 21:39:35.961083] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:15.077 [2024-07-11 21:39:35.961087] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc54a10) on tqpair=0xc0ad70 00:26:15.077 [2024-07-11 21:39:35.961098] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:15.077 [2024-07-11 21:39:35.961103] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:15.077 [2024-07-11 21:39:35.961106] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc0ad70) 00:26:15.077 [2024-07-11 21:39:35.961114] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.077 [2024-07-11 21:39:35.961131] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc54a10, cid 3, qid 0 00:26:15.077 [2024-07-11 21:39:35.961179] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:15.077 [2024-07-11 21:39:35.961186] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:15.077 [2024-07-11 21:39:35.961190] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:15.077 [2024-07-11 21:39:35.961194] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc54a10) on tqpair=0xc0ad70 00:26:15.077 [2024-07-11 21:39:35.961204] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:15.077 [2024-07-11 21:39:35.961209] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:15.077 [2024-07-11 21:39:35.961213] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd 
cid=3 on tqpair(0xc0ad70) 00:26:15.077 [2024-07-11 21:39:35.961220] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.077 [2024-07-11 21:39:35.961238] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc54a10, cid 3, qid 0 00:26:15.077 [2024-07-11 21:39:35.961289] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:15.077 [2024-07-11 21:39:35.961296] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:15.077 [2024-07-11 21:39:35.961299] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:15.077 [2024-07-11 21:39:35.961303] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc54a10) on tqpair=0xc0ad70 00:26:15.077 [2024-07-11 21:39:35.961314] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:15.077 [2024-07-11 21:39:35.961319] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:15.077 [2024-07-11 21:39:35.961322] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc0ad70) 00:26:15.077 [2024-07-11 21:39:35.961329] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.077 [2024-07-11 21:39:35.961346] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc54a10, cid 3, qid 0 00:26:15.077 [2024-07-11 21:39:35.961395] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:15.077 [2024-07-11 21:39:35.961402] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:15.077 [2024-07-11 21:39:35.961405] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:15.077 [2024-07-11 21:39:35.961410] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc54a10) on tqpair=0xc0ad70 00:26:15.077 [2024-07-11 21:39:35.961420] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:15.077 [2024-07-11 21:39:35.961425] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:15.077 [2024-07-11 21:39:35.961429] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc0ad70) 00:26:15.077 [2024-07-11 21:39:35.961436] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.077 [2024-07-11 21:39:35.961453] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc54a10, cid 3, qid 0 00:26:15.077 [2024-07-11 21:39:35.961520] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:15.077 [2024-07-11 21:39:35.961528] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:15.077 [2024-07-11 21:39:35.961532] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:15.077 [2024-07-11 21:39:35.961536] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc54a10) on tqpair=0xc0ad70 00:26:15.077 [2024-07-11 21:39:35.961547] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:15.077 [2024-07-11 21:39:35.961552] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:15.077 [2024-07-11 21:39:35.961556] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc0ad70) 00:26:15.077 [2024-07-11 21:39:35.961563] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:15.077 [2024-07-11 21:39:35.961582] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc54a10, cid 3, qid 0 00:26:15.077 [2024-07-11 21:39:35.961633] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:15.077 [2024-07-11 21:39:35.961640] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:15.077 [2024-07-11 21:39:35.961644] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:15.077 [2024-07-11 21:39:35.961648] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc54a10) on tqpair=0xc0ad70 00:26:15.077 [2024-07-11 21:39:35.961658] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:15.077 [2024-07-11 21:39:35.961663] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:15.077 [2024-07-11 21:39:35.961667] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc0ad70) 00:26:15.077 [2024-07-11 21:39:35.961674] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.077 [2024-07-11 21:39:35.961692] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc54a10, cid 3, qid 0 00:26:15.077 [2024-07-11 21:39:35.961744] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:15.077 [2024-07-11 21:39:35.961751] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:15.077 [2024-07-11 21:39:35.961755] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:15.077 [2024-07-11 21:39:35.961759] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc54a10) on tqpair=0xc0ad70 00:26:15.077 [2024-07-11 21:39:35.961770] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:15.077 [2024-07-11 21:39:35.961775] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:15.077 [2024-07-11 21:39:35.961778] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc0ad70) 00:26:15.077 [2024-07-11 21:39:35.961786] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.077 [2024-07-11 21:39:35.961803] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc54a10, cid 3, qid 0 00:26:15.077 [2024-07-11 21:39:35.961857] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:15.077 [2024-07-11 21:39:35.961864] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:15.077 [2024-07-11 21:39:35.961868] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:15.077 [2024-07-11 21:39:35.961872] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc54a10) on tqpair=0xc0ad70 00:26:15.077 [2024-07-11 21:39:35.961883] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:15.077 [2024-07-11 21:39:35.961887] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:15.077 [2024-07-11 21:39:35.961891] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc0ad70) 00:26:15.077 [2024-07-11 21:39:35.961898] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.077 [2024-07-11 21:39:35.961916] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc54a10, cid 3, qid 0 00:26:15.077 [2024-07-11 21:39:35.961964] 
nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:15.077 [2024-07-11 21:39:35.961971] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:15.077 [2024-07-11 21:39:35.961975] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:15.077 [2024-07-11 21:39:35.961979] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc54a10) on tqpair=0xc0ad70 00:26:15.077 [2024-07-11 21:39:35.961989] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:15.077 [2024-07-11 21:39:35.961994] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:15.077 [2024-07-11 21:39:35.961998] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc0ad70) 00:26:15.077 [2024-07-11 21:39:35.962005] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.077 [2024-07-11 21:39:35.962023] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc54a10, cid 3, qid 0 00:26:15.077 [2024-07-11 21:39:35.962074] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:15.077 [2024-07-11 21:39:35.962081] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:15.077 [2024-07-11 21:39:35.962085] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:15.077 [2024-07-11 21:39:35.962089] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc54a10) on tqpair=0xc0ad70 00:26:15.077 [2024-07-11 21:39:35.962100] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:15.077 [2024-07-11 21:39:35.962104] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:15.077 [2024-07-11 21:39:35.962108] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc0ad70) 00:26:15.077 [2024-07-11 21:39:35.962115] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.077 [2024-07-11 21:39:35.962133] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc54a10, cid 3, qid 0 00:26:15.077 [2024-07-11 21:39:35.962191] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:15.077 [2024-07-11 21:39:35.962199] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:15.077 [2024-07-11 21:39:35.962202] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:15.078 [2024-07-11 21:39:35.962207] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc54a10) on tqpair=0xc0ad70 00:26:15.078 [2024-07-11 21:39:35.962217] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:15.078 [2024-07-11 21:39:35.962223] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:15.078 [2024-07-11 21:39:35.962226] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc0ad70) 00:26:15.078 [2024-07-11 21:39:35.962234] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.078 [2024-07-11 21:39:35.962261] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc54a10, cid 3, qid 0 00:26:15.078 [2024-07-11 21:39:35.962310] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:15.078 [2024-07-11 21:39:35.962327] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:15.078 [2024-07-11 
21:39:35.962332] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:15.078 [2024-07-11 21:39:35.962336] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc54a10) on tqpair=0xc0ad70 00:26:15.078 [2024-07-11 21:39:35.962348] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:15.078 [2024-07-11 21:39:35.962353] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:15.078 [2024-07-11 21:39:35.962357] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc0ad70) 00:26:15.078 [2024-07-11 21:39:35.962364] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.078 [2024-07-11 21:39:35.962397] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc54a10, cid 3, qid 0 00:26:15.078 [2024-07-11 21:39:35.962447] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:15.078 [2024-07-11 21:39:35.962462] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:15.078 [2024-07-11 21:39:35.962467] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:15.078 [2024-07-11 21:39:35.962471] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc54a10) on tqpair=0xc0ad70 00:26:15.078 [2024-07-11 21:39:35.966498] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:15.078 [2024-07-11 21:39:35.966519] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:15.078 [2024-07-11 21:39:35.966524] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc0ad70) 00:26:15.078 [2024-07-11 21:39:35.966533] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.078 [2024-07-11 21:39:35.966560] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc54a10, cid 3, qid 0 00:26:15.078 [2024-07-11 21:39:35.966621] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:15.078 [2024-07-11 21:39:35.966629] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:15.078 [2024-07-11 21:39:35.966633] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:15.078 [2024-07-11 21:39:35.966637] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc54a10) on tqpair=0xc0ad70 00:26:15.078 [2024-07-11 21:39:35.966647] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:26:15.078 0 Kelvin (-273 Celsius) 00:26:15.078 Available Spare: 0% 00:26:15.078 Available Spare Threshold: 0% 00:26:15.078 Life Percentage Used: 0% 00:26:15.078 Data Units Read: 0 00:26:15.078 Data Units Written: 0 00:26:15.078 Host Read Commands: 0 00:26:15.078 Host Write Commands: 0 00:26:15.078 Controller Busy Time: 0 minutes 00:26:15.078 Power Cycles: 0 00:26:15.078 Power On Hours: 0 hours 00:26:15.078 Unsafe Shutdowns: 0 00:26:15.078 Unrecoverable Media Errors: 0 00:26:15.078 Lifetime Error Log Entries: 0 00:26:15.078 Warning Temperature Time: 0 minutes 00:26:15.078 Critical Temperature Time: 0 minutes 00:26:15.078 00:26:15.078 Number of Queues 00:26:15.078 ================ 00:26:15.078 Number of I/O Submission Queues: 127 00:26:15.078 Number of I/O Completion Queues: 127 00:26:15.078 00:26:15.078 Active Namespaces 00:26:15.078 ================= 00:26:15.078 Namespace ID:1 00:26:15.078 Error Recovery Timeout: 
Unlimited 00:26:15.078 Command Set Identifier: NVM (00h) 00:26:15.078 Deallocate: Supported 00:26:15.078 Deallocated/Unwritten Error: Not Supported 00:26:15.078 Deallocated Read Value: Unknown 00:26:15.078 Deallocate in Write Zeroes: Not Supported 00:26:15.078 Deallocated Guard Field: 0xFFFF 00:26:15.078 Flush: Supported 00:26:15.078 Reservation: Supported 00:26:15.078 Namespace Sharing Capabilities: Multiple Controllers 00:26:15.078 Size (in LBAs): 131072 (0GiB) 00:26:15.078 Capacity (in LBAs): 131072 (0GiB) 00:26:15.078 Utilization (in LBAs): 131072 (0GiB) 00:26:15.078 NGUID: ABCDEF0123456789ABCDEF0123456789 00:26:15.078 EUI64: ABCDEF0123456789 00:26:15.078 UUID: 77cf0757-62dd-45b1-9f83-d8bebab4bf79 00:26:15.078 Thin Provisioning: Not Supported 00:26:15.078 Per-NS Atomic Units: Yes 00:26:15.078 Atomic Boundary Size (Normal): 0 00:26:15.078 Atomic Boundary Size (PFail): 0 00:26:15.078 Atomic Boundary Offset: 0 00:26:15.078 Maximum Single Source Range Length: 65535 00:26:15.078 Maximum Copy Length: 65535 00:26:15.078 Maximum Source Range Count: 1 00:26:15.078 NGUID/EUI64 Never Reused: No 00:26:15.078 Namespace Write Protected: No 00:26:15.078 Number of LBA Formats: 1 00:26:15.078 Current LBA Format: LBA Format #00 00:26:15.078 LBA Format #00: Data Size: 512 Metadata Size: 0 00:26:15.078 00:26:15.078 21:39:35 -- host/identify.sh@51 -- # sync 00:26:15.337 21:39:36 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:15.337 21:39:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:15.337 21:39:36 -- common/autotest_common.sh@10 -- # set +x 00:26:15.337 21:39:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:15.337 21:39:36 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:26:15.337 21:39:36 -- host/identify.sh@56 -- # nvmftestfini 00:26:15.337 21:39:36 -- nvmf/common.sh@476 -- # nvmfcleanup 00:26:15.337 21:39:36 -- nvmf/common.sh@116 -- # sync 00:26:15.337 21:39:36 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:26:15.337 21:39:36 -- nvmf/common.sh@119 -- # set +e 00:26:15.337 21:39:36 -- nvmf/common.sh@120 -- # for i in {1..20} 00:26:15.337 21:39:36 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:26:15.337 rmmod nvme_tcp 00:26:15.337 rmmod nvme_fabrics 00:26:15.337 rmmod nvme_keyring 00:26:15.337 21:39:36 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:26:15.337 21:39:36 -- nvmf/common.sh@123 -- # set -e 00:26:15.337 21:39:36 -- nvmf/common.sh@124 -- # return 0 00:26:15.337 21:39:36 -- nvmf/common.sh@477 -- # '[' -n 80517 ']' 00:26:15.337 21:39:36 -- nvmf/common.sh@478 -- # killprocess 80517 00:26:15.337 21:39:36 -- common/autotest_common.sh@926 -- # '[' -z 80517 ']' 00:26:15.337 21:39:36 -- common/autotest_common.sh@930 -- # kill -0 80517 00:26:15.337 21:39:36 -- common/autotest_common.sh@931 -- # uname 00:26:15.337 21:39:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:15.337 21:39:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 80517 00:26:15.337 killing process with pid 80517 00:26:15.337 21:39:36 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:15.337 21:39:36 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:15.337 21:39:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 80517' 00:26:15.337 21:39:36 -- common/autotest_common.sh@945 -- # kill 80517 00:26:15.337 [2024-07-11 21:39:36.130325] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor 
of trtype' scheduled for removal in v24.05 hit 1 times 00:26:15.337 21:39:36 -- common/autotest_common.sh@950 -- # wait 80517 00:26:15.595 21:39:36 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:26:15.595 21:39:36 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:26:15.595 21:39:36 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:26:15.595 21:39:36 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:15.595 21:39:36 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:26:15.595 21:39:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:15.595 21:39:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:15.595 21:39:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:15.595 21:39:36 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:26:15.595 00:26:15.595 real 0m2.462s 00:26:15.595 user 0m6.802s 00:26:15.595 sys 0m0.642s 00:26:15.595 21:39:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:15.595 21:39:36 -- common/autotest_common.sh@10 -- # set +x 00:26:15.595 ************************************ 00:26:15.595 END TEST nvmf_identify 00:26:15.595 ************************************ 00:26:15.595 21:39:36 -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:26:15.595 21:39:36 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:26:15.595 21:39:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:15.595 21:39:36 -- common/autotest_common.sh@10 -- # set +x 00:26:15.595 ************************************ 00:26:15.595 START TEST nvmf_perf 00:26:15.595 ************************************ 00:26:15.595 21:39:36 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:26:15.595 * Looking for test storage... 
00:26:15.595 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:26:15.595 21:39:36 -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:15.595 21:39:36 -- nvmf/common.sh@7 -- # uname -s 00:26:15.595 21:39:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:15.595 21:39:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:15.595 21:39:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:15.595 21:39:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:15.595 21:39:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:15.595 21:39:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:15.595 21:39:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:15.595 21:39:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:15.595 21:39:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:15.595 21:39:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:15.854 21:39:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:65f0dc09-2f81-4c7b-a413-2a2a000e2750 00:26:15.854 21:39:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=65f0dc09-2f81-4c7b-a413-2a2a000e2750 00:26:15.854 21:39:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:15.854 21:39:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:15.854 21:39:36 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:15.854 21:39:36 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:15.854 21:39:36 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:15.854 21:39:36 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:15.854 21:39:36 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:15.854 21:39:36 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:15.854 21:39:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:15.854 21:39:36 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:15.854 21:39:36 -- paths/export.sh@5 -- 
# export PATH 00:26:15.854 21:39:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:15.854 21:39:36 -- nvmf/common.sh@46 -- # : 0 00:26:15.854 21:39:36 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:26:15.854 21:39:36 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:26:15.854 21:39:36 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:26:15.854 21:39:36 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:15.854 21:39:36 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:15.854 21:39:36 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:26:15.854 21:39:36 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:26:15.854 21:39:36 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:26:15.854 21:39:36 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:26:15.854 21:39:36 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:26:15.854 21:39:36 -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:15.854 21:39:36 -- host/perf.sh@17 -- # nvmftestinit 00:26:15.854 21:39:36 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:26:15.854 21:39:36 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:15.854 21:39:36 -- nvmf/common.sh@436 -- # prepare_net_devs 00:26:15.854 21:39:36 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:26:15.854 21:39:36 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:26:15.854 21:39:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:15.854 21:39:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:15.854 21:39:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:15.854 21:39:36 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:26:15.854 21:39:36 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:26:15.854 21:39:36 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:26:15.854 21:39:36 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:26:15.854 21:39:36 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:26:15.854 21:39:36 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:26:15.854 21:39:36 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:15.854 21:39:36 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:15.854 21:39:36 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:26:15.854 21:39:36 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:26:15.854 21:39:36 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:15.854 21:39:36 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:15.854 21:39:36 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:15.854 21:39:36 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:15.854 21:39:36 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:15.854 21:39:36 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:15.854 21:39:36 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:15.854 21:39:36 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:15.854 21:39:36 -- 
nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:26:15.854 21:39:36 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:26:15.854 Cannot find device "nvmf_tgt_br" 00:26:15.854 21:39:36 -- nvmf/common.sh@154 -- # true 00:26:15.854 21:39:36 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:26:15.854 Cannot find device "nvmf_tgt_br2" 00:26:15.854 21:39:36 -- nvmf/common.sh@155 -- # true 00:26:15.854 21:39:36 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:26:15.854 21:39:36 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:26:15.854 Cannot find device "nvmf_tgt_br" 00:26:15.854 21:39:36 -- nvmf/common.sh@157 -- # true 00:26:15.854 21:39:36 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:26:15.854 Cannot find device "nvmf_tgt_br2" 00:26:15.854 21:39:36 -- nvmf/common.sh@158 -- # true 00:26:15.854 21:39:36 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:26:15.854 21:39:36 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:26:15.854 21:39:36 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:15.854 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:15.854 21:39:36 -- nvmf/common.sh@161 -- # true 00:26:15.854 21:39:36 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:15.854 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:15.854 21:39:36 -- nvmf/common.sh@162 -- # true 00:26:15.854 21:39:36 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:26:15.854 21:39:36 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:15.854 21:39:36 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:15.854 21:39:36 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:15.854 21:39:36 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:15.854 21:39:36 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:15.854 21:39:36 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:15.854 21:39:36 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:26:15.854 21:39:36 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:26:15.854 21:39:36 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:26:15.854 21:39:36 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:26:15.854 21:39:36 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:26:15.854 21:39:36 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:26:15.854 21:39:36 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:15.854 21:39:36 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:16.113 21:39:36 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:16.113 21:39:36 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:26:16.113 21:39:36 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:26:16.113 21:39:36 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:26:16.113 21:39:36 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:16.113 21:39:36 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:16.113 21:39:36 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 
-i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:16.113 21:39:36 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:16.113 21:39:36 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:26:16.113 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:16.113 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:26:16.113 00:26:16.113 --- 10.0.0.2 ping statistics --- 00:26:16.113 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:16.113 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:26:16.113 21:39:36 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:26:16.113 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:16.113 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:26:16.113 00:26:16.113 --- 10.0.0.3 ping statistics --- 00:26:16.113 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:16.113 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:26:16.113 21:39:36 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:16.113 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:16.113 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:26:16.113 00:26:16.113 --- 10.0.0.1 ping statistics --- 00:26:16.113 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:16.113 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:26:16.113 21:39:36 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:16.113 21:39:36 -- nvmf/common.sh@421 -- # return 0 00:26:16.113 21:39:36 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:26:16.113 21:39:36 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:16.113 21:39:36 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:26:16.113 21:39:36 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:26:16.113 21:39:36 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:16.113 21:39:36 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:26:16.113 21:39:36 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:26:16.113 21:39:36 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:26:16.113 21:39:36 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:26:16.113 21:39:36 -- common/autotest_common.sh@712 -- # xtrace_disable 00:26:16.113 21:39:36 -- common/autotest_common.sh@10 -- # set +x 00:26:16.113 21:39:36 -- nvmf/common.sh@469 -- # nvmfpid=80718 00:26:16.113 21:39:36 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:16.113 21:39:36 -- nvmf/common.sh@470 -- # waitforlisten 80718 00:26:16.113 21:39:36 -- common/autotest_common.sh@819 -- # '[' -z 80718 ']' 00:26:16.113 21:39:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:16.113 21:39:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:16.113 21:39:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:16.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:16.113 21:39:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:16.113 21:39:36 -- common/autotest_common.sh@10 -- # set +x 00:26:16.113 [2024-07-11 21:39:36.957775] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:26:16.113 [2024-07-11 21:39:36.958128] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:16.371 [2024-07-11 21:39:37.101374] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:16.371 [2024-07-11 21:39:37.198955] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:16.371 [2024-07-11 21:39:37.199104] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:16.371 [2024-07-11 21:39:37.199117] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:16.371 [2024-07-11 21:39:37.199127] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:16.371 [2024-07-11 21:39:37.199300] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:16.371 [2024-07-11 21:39:37.200001] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:16.371 [2024-07-11 21:39:37.200085] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:16.371 [2024-07-11 21:39:37.200086] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:16.937 21:39:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:16.937 21:39:37 -- common/autotest_common.sh@852 -- # return 0 00:26:16.937 21:39:37 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:26:16.937 21:39:37 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:16.937 21:39:37 -- common/autotest_common.sh@10 -- # set +x 00:26:17.194 21:39:37 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:17.194 21:39:37 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:26:17.194 21:39:37 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:26:17.452 21:39:38 -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:26:17.452 21:39:38 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:26:17.711 21:39:38 -- host/perf.sh@30 -- # local_nvme_trid=0000:00:06.0 00:26:17.711 21:39:38 -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:26:18.294 21:39:38 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:26:18.294 21:39:38 -- host/perf.sh@33 -- # '[' -n 0000:00:06.0 ']' 00:26:18.294 21:39:38 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:26:18.294 21:39:38 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:26:18.294 21:39:38 -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:26:18.294 [2024-07-11 21:39:39.144869] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:18.294 21:39:39 -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:18.551 21:39:39 -- host/perf.sh@45 -- # for bdev in $bdevs 00:26:18.551 21:39:39 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:18.808 21:39:39 -- host/perf.sh@45 -- # for bdev in $bdevs 00:26:18.808 21:39:39 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:26:19.065 
21:39:39 -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:19.323 [2024-07-11 21:39:40.106289] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:19.323 21:39:40 -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:19.581 21:39:40 -- host/perf.sh@52 -- # '[' -n 0000:00:06.0 ']' 00:26:19.581 21:39:40 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:06.0' 00:26:19.581 21:39:40 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:26:19.581 21:39:40 -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:06.0' 00:26:20.515 Initializing NVMe Controllers 00:26:20.515 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:26:20.515 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:26:20.515 Initialization complete. Launching workers. 00:26:20.515 ======================================================== 00:26:20.515 Latency(us) 00:26:20.515 Device Information : IOPS MiB/s Average min max 00:26:20.515 PCIE (0000:00:06.0) NSID 1 from core 0: 24255.98 94.75 1318.99 311.41 9184.06 00:26:20.515 ======================================================== 00:26:20.515 Total : 24255.98 94.75 1318.99 311.41 9184.06 00:26:20.515 00:26:20.515 21:39:41 -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:21.890 Initializing NVMe Controllers 00:26:21.890 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:21.890 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:21.890 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:26:21.890 Initialization complete. Launching workers. 00:26:21.890 ======================================================== 00:26:21.890 Latency(us) 00:26:21.890 Device Information : IOPS MiB/s Average min max 00:26:21.890 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3111.90 12.16 321.03 126.28 4240.56 00:26:21.890 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 124.00 0.48 8128.07 7933.55 12035.34 00:26:21.890 ======================================================== 00:26:21.890 Total : 3235.89 12.64 620.19 126.28 12035.34 00:26:21.890 00:26:21.890 21:39:42 -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:23.270 Initializing NVMe Controllers 00:26:23.270 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:23.270 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:23.270 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:26:23.270 Initialization complete. Launching workers. 
00:26:23.270 ======================================================== 00:26:23.270 Latency(us) 00:26:23.270 Device Information : IOPS MiB/s Average min max 00:26:23.270 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8540.33 33.36 3749.27 512.04 9611.25 00:26:23.270 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3966.76 15.50 8089.99 6298.34 17338.30 00:26:23.270 ======================================================== 00:26:23.270 Total : 12507.09 48.86 5125.98 512.04 17338.30 00:26:23.270 00:26:23.270 21:39:44 -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:26:23.270 21:39:44 -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:26.551 Initializing NVMe Controllers 00:26:26.551 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:26.551 Controller IO queue size 128, less than required. 00:26:26.551 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:26.551 Controller IO queue size 128, less than required. 00:26:26.551 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:26.551 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:26.551 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:26:26.551 Initialization complete. Launching workers. 00:26:26.551 ======================================================== 00:26:26.551 Latency(us) 00:26:26.551 Device Information : IOPS MiB/s Average min max 00:26:26.551 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1617.49 404.37 80789.43 55417.02 156637.25 00:26:26.551 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 644.00 161.00 205250.36 94883.35 328670.13 00:26:26.551 ======================================================== 00:26:26.551 Total : 2261.49 565.37 116231.76 55417.02 328670.13 00:26:26.551 00:26:26.551 21:39:46 -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:26:26.551 No valid NVMe controllers or AIO or URING devices found 00:26:26.551 Initializing NVMe Controllers 00:26:26.551 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:26.551 Controller IO queue size 128, less than required. 00:26:26.551 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:26.551 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:26:26.551 Controller IO queue size 128, less than required. 00:26:26.551 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:26.551 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. 
Removing this ns from test 00:26:26.551 WARNING: Some requested NVMe devices were skipped 00:26:26.551 21:39:47 -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:26:29.080 Initializing NVMe Controllers 00:26:29.080 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:29.080 Controller IO queue size 128, less than required. 00:26:29.080 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:29.080 Controller IO queue size 128, less than required. 00:26:29.080 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:29.080 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:29.080 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:26:29.080 Initialization complete. Launching workers. 00:26:29.080 00:26:29.080 ==================== 00:26:29.080 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:26:29.080 TCP transport: 00:26:29.080 polls: 7178 00:26:29.080 idle_polls: 0 00:26:29.080 sock_completions: 7178 00:26:29.080 nvme_completions: 6045 00:26:29.080 submitted_requests: 9097 00:26:29.080 queued_requests: 1 00:26:29.080 00:26:29.080 ==================== 00:26:29.080 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:26:29.080 TCP transport: 00:26:29.080 polls: 7905 00:26:29.080 idle_polls: 0 00:26:29.080 sock_completions: 7905 00:26:29.080 nvme_completions: 6397 00:26:29.080 submitted_requests: 9775 00:26:29.080 queued_requests: 1 00:26:29.080 ======================================================== 00:26:29.080 Latency(us) 00:26:29.080 Device Information : IOPS MiB/s Average min max 00:26:29.080 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1574.09 393.52 83407.17 46646.57 152634.72 00:26:29.080 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1661.54 415.38 78486.64 33267.35 132776.25 00:26:29.080 ======================================================== 00:26:29.080 Total : 3235.62 808.91 80880.41 33267.35 152634.72 00:26:29.080 00:26:29.080 21:39:49 -- host/perf.sh@66 -- # sync 00:26:29.080 21:39:49 -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:29.080 21:39:49 -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:26:29.080 21:39:49 -- host/perf.sh@71 -- # '[' -n 0000:00:06.0 ']' 00:26:29.080 21:39:49 -- host/perf.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:26:29.338 21:39:50 -- host/perf.sh@72 -- # ls_guid=232dbeef-fef9-41cf-856a-fd00390a27fe 00:26:29.338 21:39:50 -- host/perf.sh@73 -- # get_lvs_free_mb 232dbeef-fef9-41cf-856a-fd00390a27fe 00:26:29.338 21:39:50 -- common/autotest_common.sh@1343 -- # local lvs_uuid=232dbeef-fef9-41cf-856a-fd00390a27fe 00:26:29.338 21:39:50 -- common/autotest_common.sh@1344 -- # local lvs_info 00:26:29.338 21:39:50 -- common/autotest_common.sh@1345 -- # local fc 00:26:29.338 21:39:50 -- common/autotest_common.sh@1346 -- # local cs 00:26:29.338 21:39:50 -- common/autotest_common.sh@1347 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:26:29.595 21:39:50 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:26:29.595 { 
00:26:29.595 "uuid": "232dbeef-fef9-41cf-856a-fd00390a27fe", 00:26:29.595 "name": "lvs_0", 00:26:29.595 "base_bdev": "Nvme0n1", 00:26:29.595 "total_data_clusters": 1278, 00:26:29.595 "free_clusters": 1278, 00:26:29.595 "block_size": 4096, 00:26:29.595 "cluster_size": 4194304 00:26:29.595 } 00:26:29.595 ]' 00:26:29.596 21:39:50 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="232dbeef-fef9-41cf-856a-fd00390a27fe") .free_clusters' 00:26:29.596 21:39:50 -- common/autotest_common.sh@1348 -- # fc=1278 00:26:29.596 21:39:50 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="232dbeef-fef9-41cf-856a-fd00390a27fe") .cluster_size' 00:26:29.853 5112 00:26:29.853 21:39:50 -- common/autotest_common.sh@1349 -- # cs=4194304 00:26:29.853 21:39:50 -- common/autotest_common.sh@1352 -- # free_mb=5112 00:26:29.853 21:39:50 -- common/autotest_common.sh@1353 -- # echo 5112 00:26:29.853 21:39:50 -- host/perf.sh@77 -- # '[' 5112 -gt 20480 ']' 00:26:29.853 21:39:50 -- host/perf.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 232dbeef-fef9-41cf-856a-fd00390a27fe lbd_0 5112 00:26:30.111 21:39:50 -- host/perf.sh@80 -- # lb_guid=3f1c87bc-97b7-4e42-8e0d-0d8b934d5312 00:26:30.111 21:39:50 -- host/perf.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore 3f1c87bc-97b7-4e42-8e0d-0d8b934d5312 lvs_n_0 00:26:30.368 21:39:51 -- host/perf.sh@83 -- # ls_nested_guid=51441529-b3df-453e-a22a-6d2dadf3a7b0 00:26:30.368 21:39:51 -- host/perf.sh@84 -- # get_lvs_free_mb 51441529-b3df-453e-a22a-6d2dadf3a7b0 00:26:30.368 21:39:51 -- common/autotest_common.sh@1343 -- # local lvs_uuid=51441529-b3df-453e-a22a-6d2dadf3a7b0 00:26:30.368 21:39:51 -- common/autotest_common.sh@1344 -- # local lvs_info 00:26:30.369 21:39:51 -- common/autotest_common.sh@1345 -- # local fc 00:26:30.369 21:39:51 -- common/autotest_common.sh@1346 -- # local cs 00:26:30.369 21:39:51 -- common/autotest_common.sh@1347 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:26:30.626 21:39:51 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:26:30.626 { 00:26:30.626 "uuid": "232dbeef-fef9-41cf-856a-fd00390a27fe", 00:26:30.626 "name": "lvs_0", 00:26:30.626 "base_bdev": "Nvme0n1", 00:26:30.626 "total_data_clusters": 1278, 00:26:30.626 "free_clusters": 0, 00:26:30.626 "block_size": 4096, 00:26:30.626 "cluster_size": 4194304 00:26:30.626 }, 00:26:30.626 { 00:26:30.626 "uuid": "51441529-b3df-453e-a22a-6d2dadf3a7b0", 00:26:30.626 "name": "lvs_n_0", 00:26:30.626 "base_bdev": "3f1c87bc-97b7-4e42-8e0d-0d8b934d5312", 00:26:30.626 "total_data_clusters": 1276, 00:26:30.626 "free_clusters": 1276, 00:26:30.626 "block_size": 4096, 00:26:30.626 "cluster_size": 4194304 00:26:30.626 } 00:26:30.626 ]' 00:26:30.626 21:39:51 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="51441529-b3df-453e-a22a-6d2dadf3a7b0") .free_clusters' 00:26:30.626 21:39:51 -- common/autotest_common.sh@1348 -- # fc=1276 00:26:30.626 21:39:51 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="51441529-b3df-453e-a22a-6d2dadf3a7b0") .cluster_size' 00:26:30.626 5104 00:26:30.626 21:39:51 -- common/autotest_common.sh@1349 -- # cs=4194304 00:26:30.626 21:39:51 -- common/autotest_common.sh@1352 -- # free_mb=5104 00:26:30.626 21:39:51 -- common/autotest_common.sh@1353 -- # echo 5104 00:26:30.626 21:39:51 -- host/perf.sh@85 -- # '[' 5104 -gt 20480 ']' 00:26:30.626 21:39:51 -- host/perf.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 51441529-b3df-453e-a22a-6d2dadf3a7b0 
lbd_nest_0 5104 00:26:30.884 21:39:51 -- host/perf.sh@88 -- # lb_nested_guid=95097623-1296-49d5-a5bb-d02ad8bba440 00:26:30.884 21:39:51 -- host/perf.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:31.141 21:39:52 -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:26:31.141 21:39:52 -- host/perf.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 95097623-1296-49d5-a5bb-d02ad8bba440 00:26:31.399 21:39:52 -- host/perf.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:31.655 21:39:52 -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:26:31.655 21:39:52 -- host/perf.sh@96 -- # io_size=("512" "131072") 00:26:31.655 21:39:52 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:26:31.655 21:39:52 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:26:31.655 21:39:52 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:32.218 No valid NVMe controllers or AIO or URING devices found 00:26:32.218 Initializing NVMe Controllers 00:26:32.218 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:32.218 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:26:32.218 WARNING: Some requested NVMe devices were skipped 00:26:32.218 21:39:52 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:26:32.218 21:39:52 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:44.439 Initializing NVMe Controllers 00:26:44.439 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:44.439 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:44.439 Initialization complete. Launching workers. 
00:26:44.439 ======================================================== 00:26:44.439 Latency(us) 00:26:44.439 Device Information : IOPS MiB/s Average min max 00:26:44.439 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 997.80 124.72 1001.81 331.08 6669.23 00:26:44.439 ======================================================== 00:26:44.439 Total : 997.80 124.72 1001.81 331.08 6669.23 00:26:44.439 00:26:44.439 21:40:03 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:26:44.439 21:40:03 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:26:44.439 21:40:03 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:44.439 No valid NVMe controllers or AIO or URING devices found 00:26:44.439 Initializing NVMe Controllers 00:26:44.439 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:44.439 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:26:44.439 WARNING: Some requested NVMe devices were skipped 00:26:44.439 21:40:03 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:26:44.439 21:40:03 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:54.485 Initializing NVMe Controllers 00:26:54.485 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:54.485 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:54.485 Initialization complete. Launching workers. 00:26:54.485 ======================================================== 00:26:54.485 Latency(us) 00:26:54.485 Device Information : IOPS MiB/s Average min max 00:26:54.485 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1301.70 162.71 24629.54 5113.57 75283.24 00:26:54.485 ======================================================== 00:26:54.485 Total : 1301.70 162.71 24629.54 5113.57 75283.24 00:26:54.485 00:26:54.485 21:40:13 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:26:54.485 21:40:13 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:26:54.485 21:40:13 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:54.485 No valid NVMe controllers or AIO or URING devices found 00:26:54.485 Initializing NVMe Controllers 00:26:54.485 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:54.485 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:26:54.485 WARNING: Some requested NVMe devices were skipped 00:26:54.485 21:40:14 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:26:54.485 21:40:14 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:04.509 Initializing NVMe Controllers 00:27:04.510 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:04.510 Controller IO queue size 128, less than required. 00:27:04.510 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:27:04.510 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:04.510 Initialization complete. Launching workers. 00:27:04.510 ======================================================== 00:27:04.510 Latency(us) 00:27:04.510 Device Information : IOPS MiB/s Average min max 00:27:04.510 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4016.25 502.03 31932.24 7362.81 65493.46 00:27:04.510 ======================================================== 00:27:04.510 Total : 4016.25 502.03 31932.24 7362.81 65493.46 00:27:04.510 00:27:04.510 21:40:24 -- host/perf.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:04.510 21:40:24 -- host/perf.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 95097623-1296-49d5-a5bb-d02ad8bba440 00:27:04.510 21:40:25 -- host/perf.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:27:04.510 21:40:25 -- host/perf.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 3f1c87bc-97b7-4e42-8e0d-0d8b934d5312 00:27:04.767 21:40:25 -- host/perf.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:27:05.024 21:40:25 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:27:05.024 21:40:25 -- host/perf.sh@114 -- # nvmftestfini 00:27:05.024 21:40:25 -- nvmf/common.sh@476 -- # nvmfcleanup 00:27:05.024 21:40:25 -- nvmf/common.sh@116 -- # sync 00:27:05.024 21:40:25 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:27:05.024 21:40:25 -- nvmf/common.sh@119 -- # set +e 00:27:05.024 21:40:25 -- nvmf/common.sh@120 -- # for i in {1..20} 00:27:05.024 21:40:25 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:27:05.024 rmmod nvme_tcp 00:27:05.024 rmmod nvme_fabrics 00:27:05.024 rmmod nvme_keyring 00:27:05.024 21:40:25 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:27:05.024 21:40:25 -- nvmf/common.sh@123 -- # set -e 00:27:05.024 21:40:25 -- nvmf/common.sh@124 -- # return 0 00:27:05.024 21:40:25 -- nvmf/common.sh@477 -- # '[' -n 80718 ']' 00:27:05.024 21:40:25 -- nvmf/common.sh@478 -- # killprocess 80718 00:27:05.024 21:40:25 -- common/autotest_common.sh@926 -- # '[' -z 80718 ']' 00:27:05.024 21:40:25 -- common/autotest_common.sh@930 -- # kill -0 80718 00:27:05.024 21:40:25 -- common/autotest_common.sh@931 -- # uname 00:27:05.024 21:40:25 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:05.024 21:40:25 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 80718 00:27:05.024 21:40:25 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:27:05.024 killing process with pid 80718 00:27:05.024 21:40:25 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:27:05.024 21:40:25 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 80718' 00:27:05.024 21:40:25 -- common/autotest_common.sh@945 -- # kill 80718 00:27:05.024 21:40:25 -- common/autotest_common.sh@950 -- # wait 80718 00:27:06.921 21:40:27 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:27:06.921 21:40:27 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:27:06.921 21:40:27 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:27:06.921 21:40:27 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:06.921 21:40:27 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:27:06.921 21:40:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:06.921 21:40:27 -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:27:06.921 21:40:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:06.921 21:40:27 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:27:06.921 ************************************ 00:27:06.921 END TEST nvmf_perf 00:27:06.921 ************************************ 00:27:06.921 00:27:06.921 real 0m51.032s 00:27:06.921 user 3m12.446s 00:27:06.921 sys 0m13.201s 00:27:06.921 21:40:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:06.921 21:40:27 -- common/autotest_common.sh@10 -- # set +x 00:27:06.921 21:40:27 -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:27:06.921 21:40:27 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:27:06.921 21:40:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:06.921 21:40:27 -- common/autotest_common.sh@10 -- # set +x 00:27:06.921 ************************************ 00:27:06.921 START TEST nvmf_fio_host 00:27:06.921 ************************************ 00:27:06.921 21:40:27 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:27:06.921 * Looking for test storage... 00:27:06.921 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:27:06.921 21:40:27 -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:06.921 21:40:27 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:06.921 21:40:27 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:06.921 21:40:27 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:06.921 21:40:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:06.921 21:40:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:06.922 21:40:27 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:06.922 21:40:27 -- paths/export.sh@5 -- # export PATH 00:27:06.922 21:40:27 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:06.922 21:40:27 -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:06.922 21:40:27 -- nvmf/common.sh@7 -- # uname -s 00:27:06.922 21:40:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:06.922 21:40:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:06.922 21:40:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:06.922 21:40:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:06.922 21:40:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:06.922 21:40:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:06.922 21:40:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:06.922 21:40:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:06.922 21:40:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:06.922 21:40:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:06.922 21:40:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:65f0dc09-2f81-4c7b-a413-2a2a000e2750 00:27:06.922 21:40:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=65f0dc09-2f81-4c7b-a413-2a2a000e2750 00:27:06.922 21:40:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:06.922 21:40:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:06.922 21:40:27 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:06.922 21:40:27 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:06.922 21:40:27 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:06.922 21:40:27 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:06.922 21:40:27 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:06.922 21:40:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:06.922 21:40:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:06.922 21:40:27 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:06.922 21:40:27 -- paths/export.sh@5 -- # export PATH 00:27:06.922 21:40:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:06.922 21:40:27 -- nvmf/common.sh@46 -- # : 0 00:27:06.922 21:40:27 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:27:06.922 21:40:27 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:27:06.922 21:40:27 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:27:06.922 21:40:27 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:06.922 21:40:27 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:06.922 21:40:27 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:27:06.922 21:40:27 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:27:06.922 21:40:27 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:27:06.922 21:40:27 -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:06.922 21:40:27 -- host/fio.sh@14 -- # nvmftestinit 00:27:06.922 21:40:27 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:27:06.922 21:40:27 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:06.922 21:40:27 -- nvmf/common.sh@436 -- # prepare_net_devs 00:27:06.922 21:40:27 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:27:06.922 21:40:27 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:27:06.922 21:40:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:06.922 21:40:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:06.922 21:40:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:06.922 21:40:27 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:27:06.922 21:40:27 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:27:06.922 21:40:27 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:27:06.922 21:40:27 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:27:06.922 21:40:27 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:27:06.922 21:40:27 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:27:06.922 21:40:27 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:06.922 21:40:27 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:06.922 21:40:27 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:27:06.922 21:40:27 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:27:06.922 21:40:27 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:27:06.922 21:40:27 -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:27:06.922 21:40:27 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:27:06.922 21:40:27 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:06.922 21:40:27 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:27:06.922 21:40:27 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:27:06.922 21:40:27 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:27:06.922 21:40:27 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:27:06.922 21:40:27 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:27:06.922 21:40:27 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:27:06.922 Cannot find device "nvmf_tgt_br" 00:27:06.922 21:40:27 -- nvmf/common.sh@154 -- # true 00:27:06.922 21:40:27 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:27:06.922 Cannot find device "nvmf_tgt_br2" 00:27:06.922 21:40:27 -- nvmf/common.sh@155 -- # true 00:27:06.922 21:40:27 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:27:06.922 21:40:27 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:27:06.922 Cannot find device "nvmf_tgt_br" 00:27:06.922 21:40:27 -- nvmf/common.sh@157 -- # true 00:27:06.922 21:40:27 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:27:06.922 Cannot find device "nvmf_tgt_br2" 00:27:06.922 21:40:27 -- nvmf/common.sh@158 -- # true 00:27:06.922 21:40:27 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:27:06.922 21:40:27 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:27:06.922 21:40:27 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:06.922 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:06.922 21:40:27 -- nvmf/common.sh@161 -- # true 00:27:06.922 21:40:27 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:06.922 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:06.922 21:40:27 -- nvmf/common.sh@162 -- # true 00:27:06.922 21:40:27 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:27:06.922 21:40:27 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:27:06.922 21:40:27 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:27:06.922 21:40:27 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:27:06.922 21:40:27 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:27:06.922 21:40:27 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:27:06.922 21:40:27 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:27:06.922 21:40:27 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:27:06.922 21:40:27 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:27:06.922 21:40:27 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:27:06.922 21:40:27 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:27:06.922 21:40:27 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:27:06.922 21:40:27 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:27:06.922 21:40:27 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:06.922 21:40:27 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
00:27:07.178 21:40:27 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:27:07.178 21:40:27 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:27:07.178 21:40:27 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:27:07.178 21:40:27 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:27:07.178 21:40:27 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:27:07.178 21:40:27 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:27:07.178 21:40:27 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:27:07.178 21:40:27 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:07.178 21:40:27 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:27:07.178 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:07.178 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.111 ms 00:27:07.178 00:27:07.178 --- 10.0.0.2 ping statistics --- 00:27:07.178 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:07.178 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:27:07.178 21:40:27 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:27:07.178 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:27:07.178 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:27:07.178 00:27:07.178 --- 10.0.0.3 ping statistics --- 00:27:07.178 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:07.178 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:27:07.178 21:40:27 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:27:07.178 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:07.178 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:27:07.178 00:27:07.178 --- 10.0.0.1 ping statistics --- 00:27:07.178 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:07.178 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:27:07.178 21:40:27 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:07.178 21:40:27 -- nvmf/common.sh@421 -- # return 0 00:27:07.178 21:40:27 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:27:07.178 21:40:27 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:07.178 21:40:27 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:27:07.178 21:40:27 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:27:07.178 21:40:27 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:07.178 21:40:27 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:27:07.178 21:40:27 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:27:07.178 21:40:27 -- host/fio.sh@16 -- # [[ y != y ]] 00:27:07.178 21:40:27 -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:27:07.178 21:40:27 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:07.178 21:40:27 -- common/autotest_common.sh@10 -- # set +x 00:27:07.178 21:40:27 -- host/fio.sh@24 -- # nvmfpid=81540 00:27:07.178 21:40:27 -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:07.178 21:40:27 -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:07.178 21:40:27 -- host/fio.sh@28 -- # waitforlisten 81540 00:27:07.178 21:40:27 -- common/autotest_common.sh@819 -- # '[' -z 81540 ']' 00:27:07.178 21:40:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:07.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
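The fio host test runs entirely over the veth-and-bridge topology built in the commands traced above. A condensed recreation of that nvmf_veth_init sequence, using the interface names and addresses shown in the log (flushing of any pre-existing devices and error handling omitted):

  # Target-side interfaces live in the nvmf_tgt_ns_spdk namespace; the initiator stays on the host
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                                 # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # first listener address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2  # second listener address
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  # With the namespace wired up and pings to 10.0.0.1/2/3 succeeding, the target is
  # launched inside it so the host-side initiator can reach the listener on 10.0.0.2:4420:
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

The fio.sh trace that follows then waits for the target's RPC socket at /var/tmp/spdk.sock before creating the TCP transport and subsystems over rpc.py.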
00:27:07.178 21:40:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:07.178 21:40:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:07.178 21:40:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:07.178 21:40:27 -- common/autotest_common.sh@10 -- # set +x 00:27:07.178 [2024-07-11 21:40:28.032118] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:27:07.178 [2024-07-11 21:40:28.032212] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:07.435 [2024-07-11 21:40:28.171340] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:07.435 [2024-07-11 21:40:28.271528] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:07.435 [2024-07-11 21:40:28.271937] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:07.435 [2024-07-11 21:40:28.272002] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:07.435 [2024-07-11 21:40:28.272232] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:07.435 [2024-07-11 21:40:28.272625] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:07.435 [2024-07-11 21:40:28.272708] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:07.435 [2024-07-11 21:40:28.272841] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:07.435 [2024-07-11 21:40:28.272842] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:08.366 21:40:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:08.366 21:40:29 -- common/autotest_common.sh@852 -- # return 0 00:27:08.366 21:40:29 -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:08.366 [2024-07-11 21:40:29.217592] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:08.366 21:40:29 -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:27:08.366 21:40:29 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:08.366 21:40:29 -- common/autotest_common.sh@10 -- # set +x 00:27:08.366 21:40:29 -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:27:08.624 Malloc1 00:27:08.624 21:40:29 -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:09.189 21:40:29 -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:09.189 21:40:30 -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:09.447 [2024-07-11 21:40:30.324846] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:09.447 21:40:30 -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:09.724 21:40:30 -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:27:09.724 21:40:30 -- host/fio.sh@41 -- # fio_nvme 
/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:09.724 21:40:30 -- common/autotest_common.sh@1339 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:09.724 21:40:30 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:27:09.724 21:40:30 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:09.724 21:40:30 -- common/autotest_common.sh@1318 -- # local sanitizers 00:27:09.724 21:40:30 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:27:09.724 21:40:30 -- common/autotest_common.sh@1320 -- # shift 00:27:09.724 21:40:30 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:27:09.724 21:40:30 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:27:09.724 21:40:30 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:27:09.724 21:40:30 -- common/autotest_common.sh@1324 -- # grep libasan 00:27:09.724 21:40:30 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:27:09.724 21:40:30 -- common/autotest_common.sh@1324 -- # asan_lib= 00:27:09.724 21:40:30 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:27:09.724 21:40:30 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:27:09.724 21:40:30 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:27:09.724 21:40:30 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:27:09.724 21:40:30 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:27:09.724 21:40:30 -- common/autotest_common.sh@1324 -- # asan_lib= 00:27:09.724 21:40:30 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:27:09.724 21:40:30 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:27:09.724 21:40:30 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:09.982 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:27:09.982 fio-3.35 00:27:09.982 Starting 1 thread 00:27:12.503 00:27:12.503 test: (groupid=0, jobs=1): err= 0: pid=81623: Thu Jul 11 21:40:33 2024 00:27:12.503 read: IOPS=9441, BW=36.9MiB/s (38.7MB/s)(74.0MiB/2006msec) 00:27:12.503 slat (usec): min=2, max=341, avg= 2.58, stdev= 3.15 00:27:12.503 clat (usec): min=2596, max=12718, avg=7050.59, stdev=462.11 00:27:12.503 lat (usec): min=2654, max=12720, avg=7053.17, stdev=461.86 00:27:12.503 clat percentiles (usec): 00:27:12.503 | 1.00th=[ 6063], 5.00th=[ 6390], 10.00th=[ 6521], 20.00th=[ 6718], 00:27:12.503 | 30.00th=[ 6849], 40.00th=[ 6915], 50.00th=[ 7046], 60.00th=[ 7177], 00:27:12.503 | 70.00th=[ 7242], 80.00th=[ 7373], 90.00th=[ 7570], 95.00th=[ 7767], 00:27:12.503 | 99.00th=[ 8160], 99.50th=[ 8717], 99.90th=[ 9372], 99.95th=[11338], 00:27:12.503 | 99.99th=[12387] 00:27:12.503 bw ( KiB/s): min=36872, max=38328, per=99.97%, avg=37756.00, stdev=640.58, samples=4 00:27:12.503 iops : min= 9218, max= 9582, avg=9439.00, stdev=160.15, samples=4 00:27:12.503 write: IOPS=9440, BW=36.9MiB/s (38.7MB/s)(74.0MiB/2006msec); 0 zone resets 00:27:12.503 slat (usec): min=2, 
max=254, avg= 2.65, stdev= 2.10 00:27:12.503 clat (usec): min=2458, max=12372, avg=6451.07, stdev=437.70 00:27:12.503 lat (usec): min=2472, max=12374, avg=6453.72, stdev=437.62 00:27:12.503 clat percentiles (usec): 00:27:12.503 | 1.00th=[ 5538], 5.00th=[ 5866], 10.00th=[ 5997], 20.00th=[ 6128], 00:27:12.503 | 30.00th=[ 6259], 40.00th=[ 6325], 50.00th=[ 6456], 60.00th=[ 6521], 00:27:12.503 | 70.00th=[ 6652], 80.00th=[ 6718], 90.00th=[ 6915], 95.00th=[ 7046], 00:27:12.503 | 99.00th=[ 7439], 99.50th=[ 8029], 99.90th=[10290], 99.95th=[11469], 00:27:12.503 | 99.99th=[12387] 00:27:12.503 bw ( KiB/s): min=37504, max=38096, per=99.96%, avg=37746.00, stdev=249.79, samples=4 00:27:12.503 iops : min= 9376, max= 9524, avg=9436.50, stdev=62.45, samples=4 00:27:12.503 lat (msec) : 4=0.08%, 10=99.82%, 20=0.11% 00:27:12.503 cpu : usr=68.83%, sys=22.74%, ctx=9, majf=0, minf=5 00:27:12.503 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:27:12.503 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:12.503 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:12.503 issued rwts: total=18940,18938,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:12.503 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:12.503 00:27:12.503 Run status group 0 (all jobs): 00:27:12.503 READ: bw=36.9MiB/s (38.7MB/s), 36.9MiB/s-36.9MiB/s (38.7MB/s-38.7MB/s), io=74.0MiB (77.6MB), run=2006-2006msec 00:27:12.503 WRITE: bw=36.9MiB/s (38.7MB/s), 36.9MiB/s-36.9MiB/s (38.7MB/s-38.7MB/s), io=74.0MiB (77.6MB), run=2006-2006msec 00:27:12.503 21:40:33 -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:27:12.503 21:40:33 -- common/autotest_common.sh@1339 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:27:12.503 21:40:33 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:27:12.503 21:40:33 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:12.503 21:40:33 -- common/autotest_common.sh@1318 -- # local sanitizers 00:27:12.503 21:40:33 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:27:12.503 21:40:33 -- common/autotest_common.sh@1320 -- # shift 00:27:12.503 21:40:33 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:27:12.503 21:40:33 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:27:12.503 21:40:33 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:27:12.503 21:40:33 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:27:12.503 21:40:33 -- common/autotest_common.sh@1324 -- # grep libasan 00:27:12.503 21:40:33 -- common/autotest_common.sh@1324 -- # asan_lib= 00:27:12.503 21:40:33 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:27:12.504 21:40:33 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:27:12.504 21:40:33 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:27:12.504 21:40:33 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:27:12.504 21:40:33 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:27:12.504 21:40:33 -- common/autotest_common.sh@1324 -- # asan_lib= 00:27:12.504 21:40:33 -- 
common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:27:12.504 21:40:33 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:27:12.504 21:40:33 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:27:12.504 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:27:12.504 fio-3.35 00:27:12.504 Starting 1 thread 00:27:15.025 00:27:15.025 test: (groupid=0, jobs=1): err= 0: pid=81666: Thu Jul 11 21:40:35 2024 00:27:15.025 read: IOPS=8612, BW=135MiB/s (141MB/s)(270MiB/2008msec) 00:27:15.025 slat (usec): min=3, max=116, avg= 3.75, stdev= 1.72 00:27:15.025 clat (usec): min=2519, max=17718, avg=8297.88, stdev=2739.99 00:27:15.025 lat (usec): min=2523, max=17722, avg=8301.63, stdev=2740.05 00:27:15.025 clat percentiles (usec): 00:27:15.026 | 1.00th=[ 3851], 5.00th=[ 4686], 10.00th=[ 5145], 20.00th=[ 5800], 00:27:15.026 | 30.00th=[ 6521], 40.00th=[ 7177], 50.00th=[ 7898], 60.00th=[ 8586], 00:27:15.026 | 70.00th=[ 9634], 80.00th=[10552], 90.00th=[12125], 95.00th=[13698], 00:27:15.026 | 99.00th=[15533], 99.50th=[15926], 99.90th=[16909], 99.95th=[17171], 00:27:15.026 | 99.99th=[17695] 00:27:15.026 bw ( KiB/s): min=62304, max=78624, per=50.88%, avg=70112.00, stdev=7903.89, samples=4 00:27:15.026 iops : min= 3894, max= 4914, avg=4382.00, stdev=493.99, samples=4 00:27:15.026 write: IOPS=5062, BW=79.1MiB/s (82.9MB/s)(143MiB/1812msec); 0 zone resets 00:27:15.026 slat (usec): min=36, max=287, avg=38.02, stdev= 5.53 00:27:15.026 clat (usec): min=3443, max=21885, avg=11674.56, stdev=2024.20 00:27:15.026 lat (usec): min=3492, max=21922, avg=11712.58, stdev=2023.94 00:27:15.026 clat percentiles (usec): 00:27:15.026 | 1.00th=[ 7963], 5.00th=[ 8848], 10.00th=[ 9372], 20.00th=[ 9896], 00:27:15.026 | 30.00th=[10421], 40.00th=[10945], 50.00th=[11469], 60.00th=[11994], 00:27:15.026 | 70.00th=[12518], 80.00th=[13173], 90.00th=[14484], 95.00th=[15270], 00:27:15.026 | 99.00th=[17171], 99.50th=[17957], 99.90th=[20841], 99.95th=[21365], 00:27:15.026 | 99.99th=[21890] 00:27:15.026 bw ( KiB/s): min=64192, max=81664, per=89.91%, avg=72824.00, stdev=8328.63, samples=4 00:27:15.026 iops : min= 4012, max= 5104, avg=4551.50, stdev=520.54, samples=4 00:27:15.026 lat (msec) : 4=1.05%, 10=54.27%, 20=44.62%, 50=0.06% 00:27:15.026 cpu : usr=81.27%, sys=13.94%, ctx=5, majf=0, minf=2 00:27:15.026 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:27:15.026 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:15.026 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:15.026 issued rwts: total=17293,9173,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:15.026 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:15.026 00:27:15.026 Run status group 0 (all jobs): 00:27:15.026 READ: bw=135MiB/s (141MB/s), 135MiB/s-135MiB/s (141MB/s-141MB/s), io=270MiB (283MB), run=2008-2008msec 00:27:15.026 WRITE: bw=79.1MiB/s (82.9MB/s), 79.1MiB/s-79.1MiB/s (82.9MB/s-82.9MB/s), io=143MiB (150MB), run=1812-1812msec 00:27:15.026 21:40:35 -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:15.026 21:40:35 -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:27:15.026 21:40:35 -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:27:15.026 21:40:35 -- host/fio.sh@51 -- # 
get_nvme_bdfs 00:27:15.026 21:40:35 -- common/autotest_common.sh@1498 -- # bdfs=() 00:27:15.026 21:40:35 -- common/autotest_common.sh@1498 -- # local bdfs 00:27:15.026 21:40:35 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:27:15.026 21:40:35 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:27:15.026 21:40:35 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:27:15.026 21:40:35 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:27:15.026 21:40:35 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:27:15.026 21:40:35 -- host/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 -i 10.0.0.2 00:27:15.283 Nvme0n1 00:27:15.283 21:40:36 -- host/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:27:15.542 21:40:36 -- host/fio.sh@53 -- # ls_guid=1f4f7733-2f72-4928-ba14-13b19bca1e7c 00:27:15.542 21:40:36 -- host/fio.sh@54 -- # get_lvs_free_mb 1f4f7733-2f72-4928-ba14-13b19bca1e7c 00:27:15.542 21:40:36 -- common/autotest_common.sh@1343 -- # local lvs_uuid=1f4f7733-2f72-4928-ba14-13b19bca1e7c 00:27:15.542 21:40:36 -- common/autotest_common.sh@1344 -- # local lvs_info 00:27:15.542 21:40:36 -- common/autotest_common.sh@1345 -- # local fc 00:27:15.542 21:40:36 -- common/autotest_common.sh@1346 -- # local cs 00:27:15.542 21:40:36 -- common/autotest_common.sh@1347 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:15.799 21:40:36 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:27:15.799 { 00:27:15.799 "uuid": "1f4f7733-2f72-4928-ba14-13b19bca1e7c", 00:27:15.799 "name": "lvs_0", 00:27:15.799 "base_bdev": "Nvme0n1", 00:27:15.799 "total_data_clusters": 4, 00:27:15.799 "free_clusters": 4, 00:27:15.799 "block_size": 4096, 00:27:15.799 "cluster_size": 1073741824 00:27:15.799 } 00:27:15.799 ]' 00:27:15.799 21:40:36 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="1f4f7733-2f72-4928-ba14-13b19bca1e7c") .free_clusters' 00:27:15.800 21:40:36 -- common/autotest_common.sh@1348 -- # fc=4 00:27:15.800 21:40:36 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="1f4f7733-2f72-4928-ba14-13b19bca1e7c") .cluster_size' 00:27:16.057 4096 00:27:16.057 21:40:36 -- common/autotest_common.sh@1349 -- # cs=1073741824 00:27:16.057 21:40:36 -- common/autotest_common.sh@1352 -- # free_mb=4096 00:27:16.057 21:40:36 -- common/autotest_common.sh@1353 -- # echo 4096 00:27:16.057 21:40:36 -- host/fio.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 4096 00:27:16.057 75b3d1a2-e7ac-473f-9223-c2f7d2b8a913 00:27:16.057 21:40:36 -- host/fio.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:27:16.620 21:40:37 -- host/fio.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:27:16.620 21:40:37 -- host/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:27:16.903 21:40:37 -- host/fio.sh@59 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:16.903 21:40:37 -- common/autotest_common.sh@1339 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:16.903 21:40:37 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:27:16.903 21:40:37 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:16.903 21:40:37 -- common/autotest_common.sh@1318 -- # local sanitizers 00:27:16.903 21:40:37 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:27:16.903 21:40:37 -- common/autotest_common.sh@1320 -- # shift 00:27:16.903 21:40:37 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:27:16.903 21:40:37 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:27:16.903 21:40:37 -- common/autotest_common.sh@1324 -- # grep libasan 00:27:16.903 21:40:37 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:27:16.903 21:40:37 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:27:16.903 21:40:37 -- common/autotest_common.sh@1324 -- # asan_lib= 00:27:16.903 21:40:37 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:27:16.903 21:40:37 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:27:16.903 21:40:37 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:27:16.903 21:40:37 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:27:16.903 21:40:37 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:27:16.903 21:40:37 -- common/autotest_common.sh@1324 -- # asan_lib= 00:27:16.903 21:40:37 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:27:16.903 21:40:37 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:27:16.903 21:40:37 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:17.178 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:27:17.178 fio-3.35 00:27:17.178 Starting 1 thread 00:27:19.712 00:27:19.712 test: (groupid=0, jobs=1): err= 0: pid=81775: Thu Jul 11 21:40:40 2024 00:27:19.712 read: IOPS=6561, BW=25.6MiB/s (26.9MB/s)(51.4MiB/2007msec) 00:27:19.712 slat (usec): min=2, max=331, avg= 2.53, stdev= 3.63 00:27:19.712 clat (usec): min=2963, max=17812, avg=10183.06, stdev=885.16 00:27:19.712 lat (usec): min=2973, max=17814, avg=10185.59, stdev=884.86 00:27:19.712 clat percentiles (usec): 00:27:19.712 | 1.00th=[ 8291], 5.00th=[ 8979], 10.00th=[ 9241], 20.00th=[ 9503], 00:27:19.712 | 30.00th=[ 9765], 40.00th=[ 9896], 50.00th=[10159], 60.00th=[10290], 00:27:19.712 | 70.00th=[10552], 80.00th=[10814], 90.00th=[11207], 95.00th=[11600], 00:27:19.712 | 99.00th=[12387], 99.50th=[13304], 99.90th=[15664], 99.95th=[16581], 00:27:19.712 | 99.99th=[17695] 00:27:19.712 bw ( KiB/s): min=25296, max=27176, per=99.85%, avg=26208.00, stdev=905.87, samples=4 00:27:19.712 iops : min= 6324, max= 6794, avg=6552.00, stdev=226.47, samples=4 00:27:19.712 write: IOPS=6572, BW=25.7MiB/s (26.9MB/s)(51.5MiB/2007msec); 0 zone resets 00:27:19.712 slat (usec): min=2, max=278, avg= 2.63, stdev= 2.63 00:27:19.712 clat (usec): min=2455, max=16612, avg=9221.80, stdev=827.07 00:27:19.712 lat (usec): min=2469, max=16615, avg=9224.43, stdev=826.94 00:27:19.712 clat percentiles 
(usec): 00:27:19.712 | 1.00th=[ 7504], 5.00th=[ 8029], 10.00th=[ 8291], 20.00th=[ 8586], 00:27:19.712 | 30.00th=[ 8848], 40.00th=[ 8979], 50.00th=[ 9241], 60.00th=[ 9372], 00:27:19.712 | 70.00th=[ 9634], 80.00th=[ 9765], 90.00th=[10159], 95.00th=[10421], 00:27:19.712 | 99.00th=[11207], 99.50th=[11994], 99.90th=[15270], 99.95th=[16188], 00:27:19.712 | 99.99th=[16581] 00:27:19.712 bw ( KiB/s): min=25856, max=26440, per=99.88%, avg=26258.00, stdev=275.15, samples=4 00:27:19.712 iops : min= 6464, max= 6610, avg=6564.50, stdev=68.79, samples=4 00:27:19.712 lat (msec) : 4=0.06%, 10=64.31%, 20=35.63% 00:27:19.712 cpu : usr=73.38%, sys=21.34%, ctx=6, majf=0, minf=5 00:27:19.712 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:27:19.712 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:19.712 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:19.712 issued rwts: total=13169,13191,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:19.712 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:19.712 00:27:19.712 Run status group 0 (all jobs): 00:27:19.712 READ: bw=25.6MiB/s (26.9MB/s), 25.6MiB/s-25.6MiB/s (26.9MB/s-26.9MB/s), io=51.4MiB (53.9MB), run=2007-2007msec 00:27:19.712 WRITE: bw=25.7MiB/s (26.9MB/s), 25.7MiB/s-25.7MiB/s (26.9MB/s-26.9MB/s), io=51.5MiB (54.0MB), run=2007-2007msec 00:27:19.712 21:40:40 -- host/fio.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:27:19.712 21:40:40 -- host/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:27:19.712 21:40:40 -- host/fio.sh@64 -- # ls_nested_guid=361fad14-fc7c-4bf0-9458-f686bd3d18be 00:27:19.712 21:40:40 -- host/fio.sh@65 -- # get_lvs_free_mb 361fad14-fc7c-4bf0-9458-f686bd3d18be 00:27:19.712 21:40:40 -- common/autotest_common.sh@1343 -- # local lvs_uuid=361fad14-fc7c-4bf0-9458-f686bd3d18be 00:27:19.712 21:40:40 -- common/autotest_common.sh@1344 -- # local lvs_info 00:27:19.712 21:40:40 -- common/autotest_common.sh@1345 -- # local fc 00:27:19.712 21:40:40 -- common/autotest_common.sh@1346 -- # local cs 00:27:19.712 21:40:40 -- common/autotest_common.sh@1347 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:19.970 21:40:40 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:27:19.970 { 00:27:19.970 "uuid": "1f4f7733-2f72-4928-ba14-13b19bca1e7c", 00:27:19.970 "name": "lvs_0", 00:27:19.970 "base_bdev": "Nvme0n1", 00:27:19.970 "total_data_clusters": 4, 00:27:19.970 "free_clusters": 0, 00:27:19.970 "block_size": 4096, 00:27:19.970 "cluster_size": 1073741824 00:27:19.970 }, 00:27:19.970 { 00:27:19.970 "uuid": "361fad14-fc7c-4bf0-9458-f686bd3d18be", 00:27:19.970 "name": "lvs_n_0", 00:27:19.970 "base_bdev": "75b3d1a2-e7ac-473f-9223-c2f7d2b8a913", 00:27:19.970 "total_data_clusters": 1022, 00:27:19.970 "free_clusters": 1022, 00:27:19.970 "block_size": 4096, 00:27:19.970 "cluster_size": 4194304 00:27:19.970 } 00:27:19.970 ]' 00:27:19.970 21:40:40 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="361fad14-fc7c-4bf0-9458-f686bd3d18be") .free_clusters' 00:27:19.970 21:40:40 -- common/autotest_common.sh@1348 -- # fc=1022 00:27:19.970 21:40:40 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="361fad14-fc7c-4bf0-9458-f686bd3d18be") .cluster_size' 00:27:20.228 4088 00:27:20.228 21:40:40 -- common/autotest_common.sh@1349 -- # cs=4194304 00:27:20.228 21:40:40 -- common/autotest_common.sh@1352 -- 
# free_mb=4088 00:27:20.228 21:40:40 -- common/autotest_common.sh@1353 -- # echo 4088 00:27:20.228 21:40:40 -- host/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 4088 00:27:20.228 08007c76-ef5b-46f8-a414-d3c45c95c750 00:27:20.486 21:40:41 -- host/fio.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:27:20.486 21:40:41 -- host/fio.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:27:20.744 21:40:41 -- host/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:27:21.002 21:40:41 -- host/fio.sh@70 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:21.002 21:40:41 -- common/autotest_common.sh@1339 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:21.002 21:40:41 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:27:21.002 21:40:41 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:21.002 21:40:41 -- common/autotest_common.sh@1318 -- # local sanitizers 00:27:21.002 21:40:41 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:27:21.002 21:40:41 -- common/autotest_common.sh@1320 -- # shift 00:27:21.002 21:40:41 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:27:21.002 21:40:41 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:27:21.002 21:40:41 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:27:21.002 21:40:41 -- common/autotest_common.sh@1324 -- # grep libasan 00:27:21.002 21:40:41 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:27:21.002 21:40:41 -- common/autotest_common.sh@1324 -- # asan_lib= 00:27:21.002 21:40:41 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:27:21.002 21:40:41 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:27:21.002 21:40:41 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:27:21.002 21:40:41 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:27:21.002 21:40:41 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:27:21.002 21:40:41 -- common/autotest_common.sh@1324 -- # asan_lib= 00:27:21.002 21:40:41 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:27:21.002 21:40:41 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:27:21.002 21:40:41 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:21.260 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:27:21.260 fio-3.35 00:27:21.260 Starting 1 thread 00:27:23.793 00:27:23.793 test: (groupid=0, jobs=1): err= 0: pid=81853: Thu Jul 11 21:40:44 2024 00:27:23.793 read: IOPS=5885, BW=23.0MiB/s (24.1MB/s)(46.2MiB/2009msec) 00:27:23.793 slat (usec): min=2, max=194, avg= 2.60, stdev= 2.40 00:27:23.793 clat (usec): 
min=3010, max=19789, avg=11378.82, stdev=958.36 00:27:23.793 lat (usec): min=3016, max=19791, avg=11381.41, stdev=958.21 00:27:23.793 clat percentiles (usec): 00:27:23.793 | 1.00th=[ 9372], 5.00th=[10028], 10.00th=[10290], 20.00th=[10683], 00:27:23.793 | 30.00th=[10945], 40.00th=[11076], 50.00th=[11338], 60.00th=[11600], 00:27:23.793 | 70.00th=[11863], 80.00th=[12125], 90.00th=[12518], 95.00th=[12780], 00:27:23.793 | 99.00th=[13435], 99.50th=[14091], 99.90th=[18220], 99.95th=[19268], 00:27:23.793 | 99.99th=[19792] 00:27:23.793 bw ( KiB/s): min=22584, max=23904, per=99.88%, avg=23514.00, stdev=626.71, samples=4 00:27:23.793 iops : min= 5646, max= 5976, avg=5878.50, stdev=156.68, samples=4 00:27:23.793 write: IOPS=5876, BW=23.0MiB/s (24.1MB/s)(46.1MiB/2009msec); 0 zone resets 00:27:23.793 slat (usec): min=2, max=151, avg= 2.68, stdev= 1.67 00:27:23.793 clat (usec): min=1917, max=19522, avg=10291.85, stdev=912.04 00:27:23.793 lat (usec): min=1926, max=19524, avg=10294.54, stdev=911.99 00:27:23.793 clat percentiles (usec): 00:27:23.793 | 1.00th=[ 8455], 5.00th=[ 8979], 10.00th=[ 9241], 20.00th=[ 9634], 00:27:23.793 | 30.00th=[ 9896], 40.00th=[10028], 50.00th=[10290], 60.00th=[10552], 00:27:23.793 | 70.00th=[10683], 80.00th=[10945], 90.00th=[11338], 95.00th=[11600], 00:27:23.793 | 99.00th=[12256], 99.50th=[12649], 99.90th=[16909], 99.95th=[19268], 00:27:23.793 | 99.99th=[19530] 00:27:23.793 bw ( KiB/s): min=23424, max=23552, per=99.94%, avg=23490.00, stdev=71.67, samples=4 00:27:23.793 iops : min= 5856, max= 5888, avg=5872.50, stdev=17.92, samples=4 00:27:23.793 lat (msec) : 2=0.01%, 4=0.06%, 10=20.61%, 20=79.33% 00:27:23.793 cpu : usr=75.10%, sys=19.52%, ctx=2, majf=0, minf=5 00:27:23.793 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:27:23.793 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:23.793 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:23.793 issued rwts: total=11824,11805,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:23.793 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:23.793 00:27:23.793 Run status group 0 (all jobs): 00:27:23.793 READ: bw=23.0MiB/s (24.1MB/s), 23.0MiB/s-23.0MiB/s (24.1MB/s-24.1MB/s), io=46.2MiB (48.4MB), run=2009-2009msec 00:27:23.793 WRITE: bw=23.0MiB/s (24.1MB/s), 23.0MiB/s-23.0MiB/s (24.1MB/s-24.1MB/s), io=46.1MiB (48.4MB), run=2009-2009msec 00:27:23.793 21:40:44 -- host/fio.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:27:23.793 21:40:44 -- host/fio.sh@74 -- # sync 00:27:23.793 21:40:44 -- host/fio.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 00:27:24.052 21:40:44 -- host/fio.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:27:24.309 21:40:45 -- host/fio.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:27:24.567 21:40:45 -- host/fio.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:27:24.825 21:40:45 -- host/fio.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:27:25.083 21:40:45 -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:27:25.083 21:40:45 -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:27:25.083 21:40:45 -- host/fio.sh@86 -- # nvmftestfini 00:27:25.083 21:40:45 -- nvmf/common.sh@476 -- # nvmfcleanup 00:27:25.083 21:40:45 -- nvmf/common.sh@116 -- # sync 00:27:25.083 
21:40:45 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:27:25.083 21:40:45 -- nvmf/common.sh@119 -- # set +e 00:27:25.083 21:40:45 -- nvmf/common.sh@120 -- # for i in {1..20} 00:27:25.083 21:40:45 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:27:25.083 rmmod nvme_tcp 00:27:25.083 rmmod nvme_fabrics 00:27:25.083 rmmod nvme_keyring 00:27:25.083 21:40:45 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:27:25.083 21:40:45 -- nvmf/common.sh@123 -- # set -e 00:27:25.083 21:40:45 -- nvmf/common.sh@124 -- # return 0 00:27:25.083 21:40:45 -- nvmf/common.sh@477 -- # '[' -n 81540 ']' 00:27:25.083 21:40:45 -- nvmf/common.sh@478 -- # killprocess 81540 00:27:25.083 21:40:45 -- common/autotest_common.sh@926 -- # '[' -z 81540 ']' 00:27:25.083 21:40:45 -- common/autotest_common.sh@930 -- # kill -0 81540 00:27:25.083 21:40:45 -- common/autotest_common.sh@931 -- # uname 00:27:25.083 21:40:45 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:25.083 21:40:45 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 81540 00:27:25.083 killing process with pid 81540 00:27:25.083 21:40:45 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:27:25.083 21:40:45 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:27:25.083 21:40:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 81540' 00:27:25.083 21:40:45 -- common/autotest_common.sh@945 -- # kill 81540 00:27:25.083 21:40:45 -- common/autotest_common.sh@950 -- # wait 81540 00:27:25.341 21:40:46 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:27:25.341 21:40:46 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:27:25.341 21:40:46 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:27:25.341 21:40:46 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:25.341 21:40:46 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:27:25.341 21:40:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:25.341 21:40:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:25.341 21:40:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:25.341 21:40:46 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:27:25.341 00:27:25.341 real 0m18.712s 00:27:25.341 user 1m22.809s 00:27:25.341 sys 0m4.374s 00:27:25.341 21:40:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:25.341 21:40:46 -- common/autotest_common.sh@10 -- # set +x 00:27:25.341 ************************************ 00:27:25.341 END TEST nvmf_fio_host 00:27:25.341 ************************************ 00:27:25.341 21:40:46 -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:27:25.341 21:40:46 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:27:25.341 21:40:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:25.341 21:40:46 -- common/autotest_common.sh@10 -- # set +x 00:27:25.599 ************************************ 00:27:25.599 START TEST nvmf_failover 00:27:25.599 ************************************ 00:27:25.599 21:40:46 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:27:25.599 * Looking for test storage... 
00:27:25.600 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:27:25.600 21:40:46 -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:25.600 21:40:46 -- nvmf/common.sh@7 -- # uname -s 00:27:25.600 21:40:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:25.600 21:40:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:25.600 21:40:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:25.600 21:40:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:25.600 21:40:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:25.600 21:40:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:25.600 21:40:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:25.600 21:40:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:25.600 21:40:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:25.600 21:40:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:25.600 21:40:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:65f0dc09-2f81-4c7b-a413-2a2a000e2750 00:27:25.600 21:40:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=65f0dc09-2f81-4c7b-a413-2a2a000e2750 00:27:25.600 21:40:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:25.600 21:40:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:25.600 21:40:46 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:25.600 21:40:46 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:25.600 21:40:46 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:25.600 21:40:46 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:25.600 21:40:46 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:25.600 21:40:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:25.600 21:40:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:25.600 21:40:46 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:25.600 21:40:46 -- paths/export.sh@5 
-- # export PATH 00:27:25.600 21:40:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:25.600 21:40:46 -- nvmf/common.sh@46 -- # : 0 00:27:25.600 21:40:46 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:27:25.600 21:40:46 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:27:25.600 21:40:46 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:27:25.600 21:40:46 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:25.600 21:40:46 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:25.600 21:40:46 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:27:25.600 21:40:46 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:27:25.600 21:40:46 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:27:25.600 21:40:46 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:25.600 21:40:46 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:25.600 21:40:46 -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:25.600 21:40:46 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:25.600 21:40:46 -- host/failover.sh@18 -- # nvmftestinit 00:27:25.600 21:40:46 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:27:25.600 21:40:46 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:25.600 21:40:46 -- nvmf/common.sh@436 -- # prepare_net_devs 00:27:25.600 21:40:46 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:27:25.600 21:40:46 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:27:25.600 21:40:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:25.600 21:40:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:25.600 21:40:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:25.600 21:40:46 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:27:25.600 21:40:46 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:27:25.600 21:40:46 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:27:25.600 21:40:46 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:27:25.600 21:40:46 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:27:25.600 21:40:46 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:27:25.600 21:40:46 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:25.600 21:40:46 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:25.600 21:40:46 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:27:25.600 21:40:46 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:27:25.600 21:40:46 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:27:25.600 21:40:46 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:27:25.600 21:40:46 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:27:25.600 21:40:46 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:25.600 21:40:46 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:27:25.600 21:40:46 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:27:25.600 21:40:46 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 
00:27:25.600 21:40:46 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:27:25.600 21:40:46 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:27:25.600 21:40:46 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:27:25.600 Cannot find device "nvmf_tgt_br" 00:27:25.600 21:40:46 -- nvmf/common.sh@154 -- # true 00:27:25.600 21:40:46 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:27:25.600 Cannot find device "nvmf_tgt_br2" 00:27:25.600 21:40:46 -- nvmf/common.sh@155 -- # true 00:27:25.600 21:40:46 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:27:25.600 21:40:46 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:27:25.600 Cannot find device "nvmf_tgt_br" 00:27:25.600 21:40:46 -- nvmf/common.sh@157 -- # true 00:27:25.600 21:40:46 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:27:25.600 Cannot find device "nvmf_tgt_br2" 00:27:25.600 21:40:46 -- nvmf/common.sh@158 -- # true 00:27:25.600 21:40:46 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:27:25.600 21:40:46 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:27:25.600 21:40:46 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:25.600 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:25.600 21:40:46 -- nvmf/common.sh@161 -- # true 00:27:25.600 21:40:46 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:25.600 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:25.600 21:40:46 -- nvmf/common.sh@162 -- # true 00:27:25.600 21:40:46 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:27:25.600 21:40:46 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:27:25.859 21:40:46 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:27:25.859 21:40:46 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:27:25.859 21:40:46 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:27:25.859 21:40:46 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:27:25.859 21:40:46 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:27:25.859 21:40:46 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:27:25.859 21:40:46 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:27:25.859 21:40:46 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:27:25.859 21:40:46 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:27:25.859 21:40:46 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:27:25.859 21:40:46 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:27:25.859 21:40:46 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:25.859 21:40:46 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:27:25.859 21:40:46 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:27:25.859 21:40:46 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:27:25.859 21:40:46 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:27:25.859 21:40:46 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:27:25.859 21:40:46 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:27:25.859 21:40:46 -- nvmf/common.sh@197 -- # ip 
link set nvmf_tgt_br2 master nvmf_br 00:27:25.859 21:40:46 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:27:25.859 21:40:46 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:25.859 21:40:46 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:27:25.859 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:25.859 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:27:25.859 00:27:25.859 --- 10.0.0.2 ping statistics --- 00:27:25.859 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:25.859 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:27:25.859 21:40:46 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:27:25.859 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:27:25.859 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:27:25.859 00:27:25.859 --- 10.0.0.3 ping statistics --- 00:27:25.859 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:25.859 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:27:25.859 21:40:46 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:27:25.859 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:25.859 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 00:27:25.859 00:27:25.859 --- 10.0.0.1 ping statistics --- 00:27:25.859 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:25.859 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:27:25.859 21:40:46 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:25.859 21:40:46 -- nvmf/common.sh@421 -- # return 0 00:27:25.859 21:40:46 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:27:25.859 21:40:46 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:25.859 21:40:46 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:27:25.859 21:40:46 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:27:25.859 21:40:46 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:25.859 21:40:46 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:27:25.859 21:40:46 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:27:25.859 21:40:46 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:27:25.859 21:40:46 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:27:25.859 21:40:46 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:25.859 21:40:46 -- common/autotest_common.sh@10 -- # set +x 00:27:25.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:25.859 21:40:46 -- nvmf/common.sh@469 -- # nvmfpid=82090 00:27:25.859 21:40:46 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:25.859 21:40:46 -- nvmf/common.sh@470 -- # waitforlisten 82090 00:27:25.859 21:40:46 -- common/autotest_common.sh@819 -- # '[' -z 82090 ']' 00:27:25.859 21:40:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:25.859 21:40:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:25.859 21:40:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:25.859 21:40:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:25.859 21:40:46 -- common/autotest_common.sh@10 -- # set +x 00:27:25.859 [2024-07-11 21:40:46.798978] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
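At this point nvmfappstart has launched the target inside the namespace (ip netns exec nvmf_tgt_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE, pid 82090) and waitforlisten blocks until the RPC socket answers. A minimal sketch of that wait, assuming the generic rpc_get_methods RPC is used as the readiness probe (the real waitforlisten in autotest_common.sh adds retry limits and error handling):

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# poll the UNIX-domain RPC socket until the target answers
while ! "$rpc_py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
done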
00:27:25.860 [2024-07-11 21:40:46.799085] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:26.118 [2024-07-11 21:40:46.941867] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:26.118 [2024-07-11 21:40:47.040324] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:26.118 [2024-07-11 21:40:47.040800] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:26.118 [2024-07-11 21:40:47.040942] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:26.118 [2024-07-11 21:40:47.041097] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:26.118 [2024-07-11 21:40:47.041433] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:26.118 [2024-07-11 21:40:47.041587] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:26.118 [2024-07-11 21:40:47.041594] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:27.052 21:40:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:27.052 21:40:47 -- common/autotest_common.sh@852 -- # return 0 00:27:27.052 21:40:47 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:27:27.052 21:40:47 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:27.052 21:40:47 -- common/autotest_common.sh@10 -- # set +x 00:27:27.052 21:40:47 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:27.052 21:40:47 -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:27.340 [2024-07-11 21:40:48.046126] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:27.340 21:40:48 -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:27:27.598 Malloc0 00:27:27.598 21:40:48 -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:27.855 21:40:48 -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:28.113 21:40:48 -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:28.370 [2024-07-11 21:40:49.062635] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:28.371 21:40:49 -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:28.371 [2024-07-11 21:40:49.286784] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:28.371 21:40:49 -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:27:28.629 [2024-07-11 21:40:49.551027] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:27:28.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
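The RPCs traced above are the entire target-side configuration for this test: one TCP transport, one Malloc-backed namespace, one subsystem, and three listeners on the same address. Collected in one place, with paths and arguments exactly as in the trace:

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
"$rpc_py" nvmf_create_transport -t tcp -o -u 8192
"$rpc_py" bdev_malloc_create 64 512 -b Malloc0
"$rpc_py" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
"$rpc_py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
for port in 4420 4421 4422; do
    "$rpc_py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s "$port"
done

The rest of the log is the failover exercise itself: bdevperf (started with -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f) attaches NVMe0 paths on ports 4420 and 4421, perform_tests drives a 15-second verify workload, and the script then removes the 4420 listener, attaches a third path on 4422 and removes the 4421 listener, re-adds the 4420 listener, and finally removes 4422, forcing the workload to fail over from path to path. The *ERROR*-tagged recv-state lines from tcp.c that accompany each removal are the target disconnecting the qpairs on the listener that just went away.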
00:27:28.629 21:40:49 -- host/failover.sh@31 -- # bdevperf_pid=82143 00:27:28.629 21:40:49 -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:27:28.629 21:40:49 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:28.629 21:40:49 -- host/failover.sh@34 -- # waitforlisten 82143 /var/tmp/bdevperf.sock 00:27:28.629 21:40:49 -- common/autotest_common.sh@819 -- # '[' -z 82143 ']' 00:27:28.629 21:40:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:28.629 21:40:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:28.629 21:40:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:28.629 21:40:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:28.629 21:40:49 -- common/autotest_common.sh@10 -- # set +x 00:27:30.000 21:40:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:30.000 21:40:50 -- common/autotest_common.sh@852 -- # return 0 00:27:30.000 21:40:50 -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:30.000 NVMe0n1 00:27:30.000 21:40:50 -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:30.564 00:27:30.564 21:40:51 -- host/failover.sh@39 -- # run_test_pid=82171 00:27:30.564 21:40:51 -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:30.564 21:40:51 -- host/failover.sh@41 -- # sleep 1 00:27:31.495 21:40:52 -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:31.752 [2024-07-11 21:40:52.477313] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d377d0 is same with the state(5) to be set 00:27:31.752 [2024-07-11 21:40:52.477378] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d377d0 is same with the state(5) to be set 00:27:31.752 [2024-07-11 21:40:52.477391] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d377d0 is same with the state(5) to be set 00:27:31.753 [2024-07-11 21:40:52.477400] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d377d0 is same with the state(5) to be set 00:27:31.753 [2024-07-11 21:40:52.477410] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d377d0 is same with the state(5) to be set 00:27:31.753 [2024-07-11 21:40:52.477419] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d377d0 is same with the state(5) to be set 00:27:31.753 [2024-07-11 21:40:52.477429] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d377d0 is same with the state(5) to be set 00:27:31.753 [2024-07-11 21:40:52.477438] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d377d0 is same with the state(5) to be set 00:27:31.753 [2024-07-11 21:40:52.477447] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d377d0 is same with the state(5) to be set 00:27:31.753 [2024-07-11 21:40:52.477456] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d377d0 is same with the state(5) to be set 00:27:31.753 [2024-07-11 21:40:52.477465] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d377d0 is same with the state(5) to be set 00:27:31.753 [2024-07-11 21:40:52.477475] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d377d0 is same with the state(5) to be set 00:27:31.753 [2024-07-11 21:40:52.477501] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d377d0 is same with the state(5) to be set 00:27:31.753 [2024-07-11 21:40:52.477512] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d377d0 is same with the state(5) to be set 00:27:31.753 [2024-07-11 21:40:52.477521] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d377d0 is same with the state(5) to be set 00:27:31.753 [2024-07-11 21:40:52.477530] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d377d0 is same with the state(5) to be set 00:27:31.753 [2024-07-11 21:40:52.477539] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d377d0 is same with the state(5) to be set 00:27:31.753 [2024-07-11 21:40:52.477549] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d377d0 is same with the state(5) to be set 00:27:31.753 [2024-07-11 21:40:52.477558] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d377d0 is same with the state(5) to be set 00:27:31.753 [2024-07-11 21:40:52.477580] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d377d0 is same with the state(5) to be set 00:27:31.753 21:40:52 -- host/failover.sh@45 -- # sleep 3 00:27:35.107 21:40:55 -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:35.107 00:27:35.107 21:40:55 -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:35.107 [2024-07-11 21:40:56.054901] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d38400 is same with the state(5) to be set 00:27:35.107 [2024-07-11 21:40:56.054969] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d38400 is same with the state(5) to be set 00:27:35.107 [2024-07-11 21:40:56.054982] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d38400 is same with the state(5) to be set 00:27:35.107 [2024-07-11 21:40:56.054992] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d38400 is same with the state(5) to be set 00:27:35.107 [2024-07-11 21:40:56.055001] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d38400 is same with the state(5) to be set 00:27:35.107 [2024-07-11 21:40:56.055010] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d38400 is same with the state(5) to be set 00:27:35.107 [2024-07-11 21:40:56.055019] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d38400 is same with the state(5) to be set 
00:27:35.107 [2024-07-11 21:40:56.055029] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d38400 is same with the state(5) to be set 00:27:35.107 [2024-07-11 21:40:56.055038] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d38400 is same with the state(5) to be set 00:27:35.107 [2024-07-11 21:40:56.055047] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d38400 is same with the state(5) to be set 00:27:35.107 [2024-07-11 21:40:56.055056] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d38400 is same with the state(5) to be set 00:27:35.107 [2024-07-11 21:40:56.055065] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d38400 is same with the state(5) to be set 00:27:35.107 [2024-07-11 21:40:56.055074] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d38400 is same with the state(5) to be set 00:27:35.107 [2024-07-11 21:40:56.055083] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d38400 is same with the state(5) to be set 00:27:35.107 [2024-07-11 21:40:56.055092] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d38400 is same with the state(5) to be set 00:27:35.107 [2024-07-11 21:40:56.055101] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d38400 is same with the state(5) to be set 00:27:35.108 [2024-07-11 21:40:56.055110] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d38400 is same with the state(5) to be set 00:27:35.108 [2024-07-11 21:40:56.055119] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d38400 is same with the state(5) to be set 00:27:35.108 [2024-07-11 21:40:56.055128] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d38400 is same with the state(5) to be set 00:27:35.108 [2024-07-11 21:40:56.055137] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d38400 is same with the state(5) to be set 00:27:35.108 [2024-07-11 21:40:56.055147] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d38400 is same with the state(5) to be set 00:27:35.108 [2024-07-11 21:40:56.055156] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d38400 is same with the state(5) to be set 00:27:35.108 [2024-07-11 21:40:56.055165] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d38400 is same with the state(5) to be set 00:27:35.108 [2024-07-11 21:40:56.055173] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d38400 is same with the state(5) to be set 00:27:35.108 [2024-07-11 21:40:56.055182] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d38400 is same with the state(5) to be set 00:27:35.108 [2024-07-11 21:40:56.055191] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d38400 is same with the state(5) to be set 00:27:35.108 [2024-07-11 21:40:56.055199] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d38400 is same with the state(5) to be set 00:27:35.366 21:40:56 -- host/failover.sh@50 -- # sleep 3 00:27:38.645 21:40:59 -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:38.645 [2024-07-11 
21:40:59.337767] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:38.645 21:40:59 -- host/failover.sh@55 -- # sleep 1 00:27:39.578 21:41:00 -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:27:39.835 [2024-07-11 21:41:00.621169] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edbcc0 is same with the state(5) to be set 00:27:39.835 [2024-07-11 21:41:00.621209] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edbcc0 is same with the state(5) to be set 00:27:39.835 [2024-07-11 21:41:00.621221] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edbcc0 is same with the state(5) to be set 00:27:39.835 [2024-07-11 21:41:00.621230] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edbcc0 is same with the state(5) to be set 00:27:39.835 [2024-07-11 21:41:00.621239] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edbcc0 is same with the state(5) to be set 00:27:39.835 [2024-07-11 21:41:00.621248] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edbcc0 is same with the state(5) to be set 00:27:39.835 [2024-07-11 21:41:00.621257] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edbcc0 is same with the state(5) to be set 00:27:39.835 [2024-07-11 21:41:00.621266] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edbcc0 is same with the state(5) to be set 00:27:39.835 [2024-07-11 21:41:00.621274] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edbcc0 is same with the state(5) to be set 00:27:39.835 [2024-07-11 21:41:00.621283] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edbcc0 is same with the state(5) to be set 00:27:39.835 [2024-07-11 21:41:00.621292] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edbcc0 is same with the state(5) to be set 00:27:39.835 [2024-07-11 21:41:00.621300] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edbcc0 is same with the state(5) to be set 00:27:39.835 [2024-07-11 21:41:00.621309] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edbcc0 is same with the state(5) to be set 00:27:39.835 [2024-07-11 21:41:00.621318] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edbcc0 is same with the state(5) to be set 00:27:39.835 [2024-07-11 21:41:00.621327] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edbcc0 is same with the state(5) to be set 00:27:39.835 [2024-07-11 21:41:00.621336] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edbcc0 is same with the state(5) to be set 00:27:39.835 [2024-07-11 21:41:00.621344] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edbcc0 is same with the state(5) to be set 00:27:39.835 [2024-07-11 21:41:00.621353] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edbcc0 is same with the state(5) to be set 00:27:39.835 21:41:00 -- host/failover.sh@59 -- # wait 82171 00:27:46.393 0 00:27:46.393 21:41:06 -- host/failover.sh@61 -- # killprocess 82143 00:27:46.393 21:41:06 -- common/autotest_common.sh@926 -- # '[' -z 82143 ']' 00:27:46.393 21:41:06 -- 
common/autotest_common.sh@930 -- # kill -0 82143 00:27:46.393 21:41:06 -- common/autotest_common.sh@931 -- # uname 00:27:46.393 21:41:06 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:46.393 21:41:06 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 82143 00:27:46.393 21:41:06 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:27:46.393 21:41:06 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:27:46.393 21:41:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 82143' 00:27:46.393 killing process with pid 82143 00:27:46.393 21:41:06 -- common/autotest_common.sh@945 -- # kill 82143 00:27:46.393 21:41:06 -- common/autotest_common.sh@950 -- # wait 82143 00:27:46.393 21:41:06 -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:27:46.393 [2024-07-11 21:40:49.614835] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:27:46.393 [2024-07-11 21:40:49.614950] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82143 ] 00:27:46.393 [2024-07-11 21:40:49.751653] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:46.393 [2024-07-11 21:40:49.850915] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:46.393 Running I/O for 15 seconds... 00:27:46.393 [2024-07-11 21:40:52.477647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:114816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.393 [2024-07-11 21:40:52.477703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.393 [2024-07-11 21:40:52.477733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:114824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.393 [2024-07-11 21:40:52.477750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.393 [2024-07-11 21:40:52.477767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:114832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.393 [2024-07-11 21:40:52.477782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.393 [2024-07-11 21:40:52.477798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:114840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.393 [2024-07-11 21:40:52.477812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.393 [2024-07-11 21:40:52.477828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:114848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.393 [2024-07-11 21:40:52.477842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.393 [2024-07-11 21:40:52.477858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:114864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.393 [2024-07-11 21:40:52.477872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.393 [2024-07-11 21:40:52.477887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:114208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.393 [2024-07-11 21:40:52.477902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.393 [2024-07-11 21:40:52.477918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:114216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.393 [2024-07-11 21:40:52.477931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.393 [2024-07-11 21:40:52.477948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:114224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.393 [2024-07-11 21:40:52.477962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.393 [2024-07-11 21:40:52.477979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:114256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.393 [2024-07-11 21:40:52.477992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.393 [2024-07-11 21:40:52.478008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:114264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.393 [2024-07-11 21:40:52.478022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.393 [2024-07-11 21:40:52.478066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:114272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.393 [2024-07-11 21:40:52.478082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.393 [2024-07-11 21:40:52.478097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:114280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.393 [2024-07-11 21:40:52.478111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.393 [2024-07-11 21:40:52.478127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:114288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.393 [2024-07-11 21:40:52.478147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.393 [2024-07-11 21:40:52.478164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:114872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.393 [2024-07-11 21:40:52.478178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.393 [2024-07-11 21:40:52.478193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:114880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.393 [2024-07-11 21:40:52.478207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.393 [2024-07-11 21:40:52.478223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:114920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.393 [2024-07-11 21:40:52.478238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.393 [2024-07-11 21:40:52.478254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:114928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.393 [2024-07-11 21:40:52.478267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.393 [2024-07-11 21:40:52.478283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:114944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.393 [2024-07-11 21:40:52.478296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.393 [2024-07-11 21:40:52.478312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:114952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.393 [2024-07-11 21:40:52.478326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.393 [2024-07-11 21:40:52.478342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:114968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.393 [2024-07-11 21:40:52.478356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.393 [2024-07-11 21:40:52.478372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:114976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.393 [2024-07-11 21:40:52.478387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.393 [2024-07-11 21:40:52.478403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:114984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.393 [2024-07-11 21:40:52.478417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.393 [2024-07-11 21:40:52.478443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:114992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.393 [2024-07-11 21:40:52.478468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.393 [2024-07-11 21:40:52.478498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:115000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.393 [2024-07-11 21:40:52.478515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.393 [2024-07-11 21:40:52.478531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:115008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.393 [2024-07-11 21:40:52.478544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:27:46.393 [2024-07-11 21:40:52.478560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:115016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.393 [2024-07-11 21:40:52.478574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.393 [2024-07-11 21:40:52.478590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:114296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.393 [2024-07-11 21:40:52.478603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.393 [2024-07-11 21:40:52.478619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:114328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.393 [2024-07-11 21:40:52.478632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.393 [2024-07-11 21:40:52.478648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:114336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.393 [2024-07-11 21:40:52.478675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.393 [2024-07-11 21:40:52.478692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:114352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.393 [2024-07-11 21:40:52.478706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.393 [2024-07-11 21:40:52.478722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:114360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.393 [2024-07-11 21:40:52.478735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.393 [2024-07-11 21:40:52.478751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:114376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.393 [2024-07-11 21:40:52.478765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.393 [2024-07-11 21:40:52.478780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:114384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.393 [2024-07-11 21:40:52.478794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.393 [2024-07-11 21:40:52.478809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:114400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.393 [2024-07-11 21:40:52.478823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.393 [2024-07-11 21:40:52.478839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:115024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.393 [2024-07-11 21:40:52.478852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.393 [2024-07-11 
21:40:52.478876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:115032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.393 [2024-07-11 21:40:52.478891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.393 [2024-07-11 21:40:52.478907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:115040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.393 [2024-07-11 21:40:52.478921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.393 [2024-07-11 21:40:52.478937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:115048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.393 [2024-07-11 21:40:52.478951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.393 [2024-07-11 21:40:52.478966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:115056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.393 [2024-07-11 21:40:52.478980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.394 [2024-07-11 21:40:52.478996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:115064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.394 [2024-07-11 21:40:52.479009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.394 [2024-07-11 21:40:52.479025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:115072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.394 [2024-07-11 21:40:52.479038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.394 [2024-07-11 21:40:52.479054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:115080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.394 [2024-07-11 21:40:52.479067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.394 [2024-07-11 21:40:52.479083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:115088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.394 [2024-07-11 21:40:52.479097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.394 [2024-07-11 21:40:52.479113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:115096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.394 [2024-07-11 21:40:52.479126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.394 [2024-07-11 21:40:52.479142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:115104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.394 [2024-07-11 21:40:52.479163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.394 [2024-07-11 21:40:52.479180] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:114408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.394 [2024-07-11 21:40:52.479194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.394 [2024-07-11 21:40:52.479209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:114416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.394 [2024-07-11 21:40:52.479223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.394 [2024-07-11 21:40:52.479239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:114424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.394 [2024-07-11 21:40:52.479258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.394 [2024-07-11 21:40:52.479275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:114432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.394 [2024-07-11 21:40:52.479289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.394 [2024-07-11 21:40:52.479305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:114440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.394 [2024-07-11 21:40:52.479318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.394 [2024-07-11 21:40:52.479334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:114464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.394 [2024-07-11 21:40:52.479348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.394 [2024-07-11 21:40:52.479363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:114472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.394 [2024-07-11 21:40:52.479377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.394 [2024-07-11 21:40:52.479393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:114488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.394 [2024-07-11 21:40:52.479407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.394 [2024-07-11 21:40:52.479422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:115112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.394 [2024-07-11 21:40:52.479436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.394 [2024-07-11 21:40:52.479452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:115120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.394 [2024-07-11 21:40:52.479466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.394 [2024-07-11 21:40:52.479492] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:14 nsid:1 lba:115128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.394 [2024-07-11 21:40:52.479508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.394 [2024-07-11 21:40:52.479524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:115136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.394 [2024-07-11 21:40:52.479539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.394 [2024-07-11 21:40:52.479555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:115144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.394 [2024-07-11 21:40:52.479569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.394 [2024-07-11 21:40:52.479585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:115152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.394 [2024-07-11 21:40:52.479599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.394 [2024-07-11 21:40:52.479614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:115160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.394 [2024-07-11 21:40:52.479628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.394 [2024-07-11 21:40:52.479644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:115168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.394 [2024-07-11 21:40:52.479671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.394 [2024-07-11 21:40:52.479689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:115176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.394 [2024-07-11 21:40:52.479703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.394 [2024-07-11 21:40:52.479719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:115184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.394 [2024-07-11 21:40:52.479733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.394 [2024-07-11 21:40:52.479749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:115192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.394 [2024-07-11 21:40:52.479763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.394 [2024-07-11 21:40:52.479779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:115200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.394 [2024-07-11 21:40:52.479792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.394 [2024-07-11 21:40:52.479808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 
lba:115208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.394 [2024-07-11 21:40:52.479822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.394 [2024-07-11 21:40:52.479838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:114512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.394 [2024-07-11 21:40:52.479852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.394 [2024-07-11 21:40:52.479867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:114528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.394 [2024-07-11 21:40:52.479882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.394 [2024-07-11 21:40:52.479897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:114536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.394 [2024-07-11 21:40:52.479911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.394 [2024-07-11 21:40:52.479927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:114552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.394 [2024-07-11 21:40:52.479951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.394 [2024-07-11 21:40:52.479967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:114560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.394 [2024-07-11 21:40:52.479981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.394 [2024-07-11 21:40:52.479996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:114568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.394 [2024-07-11 21:40:52.480010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.394 [2024-07-11 21:40:52.480026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:114576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.394 [2024-07-11 21:40:52.480040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.394 [2024-07-11 21:40:52.480062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:114600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.394 [2024-07-11 21:40:52.480077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.394 [2024-07-11 21:40:52.480092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:115216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.394 [2024-07-11 21:40:52.480106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.394 [2024-07-11 21:40:52.480121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:115224 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:27:46.394 [2024-07-11 21:40:52.480135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.394 [2024-07-11 21:40:52.480151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:115232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.394 [2024-07-11 21:40:52.480171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.394 [2024-07-11 21:40:52.480187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:115240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.394 [2024-07-11 21:40:52.480201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.394 [2024-07-11 21:40:52.480216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:115248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.394 [2024-07-11 21:40:52.480231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.394 [2024-07-11 21:40:52.480246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:115256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.394 [2024-07-11 21:40:52.480260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.394 [2024-07-11 21:40:52.480275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:115264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.394 [2024-07-11 21:40:52.480289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.394 [2024-07-11 21:40:52.480304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:115272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.394 [2024-07-11 21:40:52.480318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.394 [2024-07-11 21:40:52.480333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:115280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.394 [2024-07-11 21:40:52.480347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.394 [2024-07-11 21:40:52.480363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:115288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.394 [2024-07-11 21:40:52.480377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.394 [2024-07-11 21:40:52.480392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:115296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.394 [2024-07-11 21:40:52.480406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.394 [2024-07-11 21:40:52.480422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:115304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.394 
[2024-07-11 21:40:52.480442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.394 [2024-07-11 21:40:52.480459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:114608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.394 [2024-07-11 21:40:52.480473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.394 [2024-07-11 21:40:52.480500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:114624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.394 [2024-07-11 21:40:52.480515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.395 [2024-07-11 21:40:52.480530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:114632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.395 [2024-07-11 21:40:52.480551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.395 [2024-07-11 21:40:52.480568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:114640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.395 [2024-07-11 21:40:52.480582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.395 [2024-07-11 21:40:52.480598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:114648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.395 [2024-07-11 21:40:52.480611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.395 [2024-07-11 21:40:52.480628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:114656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.395 [2024-07-11 21:40:52.480641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.395 [2024-07-11 21:40:52.480657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:114672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.395 [2024-07-11 21:40:52.480676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.395 [2024-07-11 21:40:52.480692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:114696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.395 [2024-07-11 21:40:52.480706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.395 [2024-07-11 21:40:52.480722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:115312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.395 [2024-07-11 21:40:52.480736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.395 [2024-07-11 21:40:52.480751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:115320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.395 [2024-07-11 21:40:52.480765] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.395 [2024-07-11 21:40:52.480781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:115328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.395 [2024-07-11 21:40:52.480794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.395 [2024-07-11 21:40:52.480810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:115336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.395 [2024-07-11 21:40:52.480823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.395 [2024-07-11 21:40:52.480845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:114704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.395 [2024-07-11 21:40:52.480859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.395 [2024-07-11 21:40:52.480875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:114720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.395 [2024-07-11 21:40:52.480889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.395 [2024-07-11 21:40:52.480904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:114728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.395 [2024-07-11 21:40:52.480918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.395 [2024-07-11 21:40:52.480933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:114752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.395 [2024-07-11 21:40:52.480947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.395 [2024-07-11 21:40:52.480963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:114760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.395 [2024-07-11 21:40:52.480976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.395 [2024-07-11 21:40:52.480992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:114776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.395 [2024-07-11 21:40:52.481005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.395 [2024-07-11 21:40:52.481021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:114792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.395 [2024-07-11 21:40:52.481040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.395 [2024-07-11 21:40:52.481056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:114800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.395 [2024-07-11 21:40:52.481069] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.395 [2024-07-11 21:40:52.481085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:115344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.395 [2024-07-11 21:40:52.481099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.395 [2024-07-11 21:40:52.481114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:115352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.395 [2024-07-11 21:40:52.481128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.395 [2024-07-11 21:40:52.481143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:115360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.395 [2024-07-11 21:40:52.481165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.395 [2024-07-11 21:40:52.481181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:115368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.395 [2024-07-11 21:40:52.481205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.395 [2024-07-11 21:40:52.481221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:115376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.395 [2024-07-11 21:40:52.481241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.395 [2024-07-11 21:40:52.481257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:115384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.395 [2024-07-11 21:40:52.481271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.395 [2024-07-11 21:40:52.481287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:115392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.395 [2024-07-11 21:40:52.481300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.395 [2024-07-11 21:40:52.481316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:115400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.395 [2024-07-11 21:40:52.481330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.395 [2024-07-11 21:40:52.481345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:115408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.395 [2024-07-11 21:40:52.481359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.395 [2024-07-11 21:40:52.481374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:115416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.395 [2024-07-11 21:40:52.481388] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.395 [2024-07-11 21:40:52.481403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:115424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.395 [2024-07-11 21:40:52.481417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.395 [2024-07-11 21:40:52.481432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:115432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.395 [2024-07-11 21:40:52.481446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.395 [2024-07-11 21:40:52.481462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:115440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.395 [2024-07-11 21:40:52.481475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.395 [2024-07-11 21:40:52.481503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:114808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.395 [2024-07-11 21:40:52.481518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.395 [2024-07-11 21:40:52.481534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:114856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.395 [2024-07-11 21:40:52.481554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.395 [2024-07-11 21:40:52.481570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:114888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.395 [2024-07-11 21:40:52.481584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.395 [2024-07-11 21:40:52.481600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:114896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.395 [2024-07-11 21:40:52.481614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.395 [2024-07-11 21:40:52.481639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:114904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.395 [2024-07-11 21:40:52.481654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.395 [2024-07-11 21:40:52.481669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:114912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.395 [2024-07-11 21:40:52.481687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.395 [2024-07-11 21:40:52.481704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:114936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.395 [2024-07-11 21:40:52.481718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.395 [2024-07-11 21:40:52.481733] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234d3d0 is same with the state(5) to be set 00:27:46.395 [2024-07-11 21:40:52.481750] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:46.395 [2024-07-11 21:40:52.481761] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:46.395 [2024-07-11 21:40:52.481773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:114960 len:8 PRP1 0x0 PRP2 0x0 00:27:46.395 [2024-07-11 21:40:52.481786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.395 [2024-07-11 21:40:52.481847] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x234d3d0 was disconnected and freed. reset controller. 00:27:46.395 [2024-07-11 21:40:52.481864] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:27:46.395 [2024-07-11 21:40:52.481924] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:46.395 [2024-07-11 21:40:52.481944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.395 [2024-07-11 21:40:52.481960] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:46.395 [2024-07-11 21:40:52.481973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.395 [2024-07-11 21:40:52.481987] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:46.395 [2024-07-11 21:40:52.482000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.395 [2024-07-11 21:40:52.482016] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:46.395 [2024-07-11 21:40:52.482029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.395 [2024-07-11 21:40:52.482043] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:46.395 [2024-07-11 21:40:52.482102] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2325c80 (9): Bad file descriptor 00:27:46.395 [2024-07-11 21:40:52.484649] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:46.395 [2024-07-11 21:40:52.519016] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:27:46.395 [2024-07-11 21:40:56.055270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:94616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.395 [2024-07-11 21:40:56.055321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.395 [2024-07-11 21:40:56.055378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:94624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.395 [2024-07-11 21:40:56.055395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.396 [2024-07-11 21:40:56.055412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:94632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.396 [2024-07-11 21:40:56.055426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.396 [2024-07-11 21:40:56.055441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:94640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.396 [2024-07-11 21:40:56.055455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.396 [2024-07-11 21:40:56.055471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:94648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.396 [2024-07-11 21:40:56.055500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.396 [2024-07-11 21:40:56.055519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:94656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.396 [2024-07-11 21:40:56.055533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.396 [2024-07-11 21:40:56.055548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:94664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.396 [2024-07-11 21:40:56.055562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.396 [2024-07-11 21:40:56.055578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.396 [2024-07-11 21:40:56.055591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.396 [2024-07-11 21:40:56.055607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:95264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.396 [2024-07-11 21:40:56.055620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.396 [2024-07-11 21:40:56.055636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:95304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.396 [2024-07-11 21:40:56.055649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.396 [2024-07-11 21:40:56.055665] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:95336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.396 [2024-07-11 21:40:56.055689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.396 [2024-07-11 21:40:56.055704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:95344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.396 [2024-07-11 21:40:56.055717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.396 [2024-07-11 21:40:56.055732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:95352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.396 [2024-07-11 21:40:56.055746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.396 [2024-07-11 21:40:56.055762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:95368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.396 [2024-07-11 21:40:56.055785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.396 [2024-07-11 21:40:56.055803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:94672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.396 [2024-07-11 21:40:56.055819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.396 [2024-07-11 21:40:56.055835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:94712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.396 [2024-07-11 21:40:56.055849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.396 [2024-07-11 21:40:56.055864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:94744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.396 [2024-07-11 21:40:56.055881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.396 [2024-07-11 21:40:56.055897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:94760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.396 [2024-07-11 21:40:56.055910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.396 [2024-07-11 21:40:56.055926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:94768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.396 [2024-07-11 21:40:56.055940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.396 [2024-07-11 21:40:56.055955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:94776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.396 [2024-07-11 21:40:56.055968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.396 [2024-07-11 21:40:56.055984] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:94784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.396 [2024-07-11 21:40:56.055998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.396 [2024-07-11 21:40:56.056013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:94792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.396 [2024-07-11 21:40:56.056026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.396 [2024-07-11 21:40:56.056042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:95376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.396 [2024-07-11 21:40:56.056055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.396 [2024-07-11 21:40:56.056071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:95384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.396 [2024-07-11 21:40:56.056084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.396 [2024-07-11 21:40:56.056100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:95392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.396 [2024-07-11 21:40:56.056114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.396 [2024-07-11 21:40:56.056129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:95400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.396 [2024-07-11 21:40:56.056143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.396 [2024-07-11 21:40:56.056159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:95408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.396 [2024-07-11 21:40:56.056180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.396 [2024-07-11 21:40:56.056196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:95416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.396 [2024-07-11 21:40:56.056213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.396 [2024-07-11 21:40:56.056228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:95424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.396 [2024-07-11 21:40:56.056242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.396 [2024-07-11 21:40:56.056257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:95432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.396 [2024-07-11 21:40:56.056271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.396 [2024-07-11 21:40:56.056286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:126 nsid:1 lba:95440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.396 [2024-07-11 21:40:56.056300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.396 [2024-07-11 21:40:56.056322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:95448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.396 [2024-07-11 21:40:56.056335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.396 [2024-07-11 21:40:56.056351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:95456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.396 [2024-07-11 21:40:56.056366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.396 [2024-07-11 21:40:56.056382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:95464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.396 [2024-07-11 21:40:56.056396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.396 [2024-07-11 21:40:56.056412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:95472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.396 [2024-07-11 21:40:56.056426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.396 [2024-07-11 21:40:56.056441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:95480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.396 [2024-07-11 21:40:56.056455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.396 [2024-07-11 21:40:56.056471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:95488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.396 [2024-07-11 21:40:56.056496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.396 [2024-07-11 21:40:56.056514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:95496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.396 [2024-07-11 21:40:56.056528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.396 [2024-07-11 21:40:56.056543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:95504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.396 [2024-07-11 21:40:56.056558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.396 [2024-07-11 21:40:56.056581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:95512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.396 [2024-07-11 21:40:56.056595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.396 [2024-07-11 21:40:56.056611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:95520 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.396 [2024-07-11 21:40:56.056625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.396 [2024-07-11 21:40:56.056640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:95528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.396 [2024-07-11 21:40:56.056654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.396 [2024-07-11 21:40:56.056670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:95536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.396 [2024-07-11 21:40:56.056683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.396 [2024-07-11 21:40:56.056699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:94800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.396 [2024-07-11 21:40:56.056713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.396 [2024-07-11 21:40:56.056728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:94832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.396 [2024-07-11 21:40:56.056742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.396 [2024-07-11 21:40:56.056758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:94848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.396 [2024-07-11 21:40:56.056771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.397 [2024-07-11 21:40:56.056787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:94856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.397 [2024-07-11 21:40:56.056801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.397 [2024-07-11 21:40:56.056816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:94880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.397 [2024-07-11 21:40:56.056830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.397 [2024-07-11 21:40:56.056847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:94888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.397 [2024-07-11 21:40:56.056861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.397 [2024-07-11 21:40:56.056877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:94920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.397 [2024-07-11 21:40:56.056891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.397 [2024-07-11 21:40:56.056906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:94960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:46.397 [2024-07-11 21:40:56.056920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.397 [2024-07-11 21:40:56.056936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:95544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.397 [2024-07-11 21:40:56.056956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.397 [2024-07-11 21:40:56.056972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:95552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.397 [2024-07-11 21:40:56.056986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.397 [2024-07-11 21:40:56.057001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:95560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.397 [2024-07-11 21:40:56.057015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.397 [2024-07-11 21:40:56.057030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:95568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.397 [2024-07-11 21:40:56.057044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.397 [2024-07-11 21:40:56.057060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:95576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.397 [2024-07-11 21:40:56.057074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.397 [2024-07-11 21:40:56.057089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:95584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.397 [2024-07-11 21:40:56.057103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.397 [2024-07-11 21:40:56.057118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:95592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.397 [2024-07-11 21:40:56.057132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.397 [2024-07-11 21:40:56.057147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:95600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.397 [2024-07-11 21:40:56.057161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.397 [2024-07-11 21:40:56.057176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:95608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.397 [2024-07-11 21:40:56.057190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.397 [2024-07-11 21:40:56.057206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:95616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.397 [2024-07-11 21:40:56.057219] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.397 [2024-07-11 21:40:56.057235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:95624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.397 [2024-07-11 21:40:56.057248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.397 [2024-07-11 21:40:56.057264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:95632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.397 [2024-07-11 21:40:56.057277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.397 [2024-07-11 21:40:56.057293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:95640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.397 [2024-07-11 21:40:56.057307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.397 [2024-07-11 21:40:56.057329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:95648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.397 [2024-07-11 21:40:56.057344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.397 [2024-07-11 21:40:56.057361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:95656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.397 [2024-07-11 21:40:56.057375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.397 [2024-07-11 21:40:56.057391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:95664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.397 [2024-07-11 21:40:56.057405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.397 [2024-07-11 21:40:56.057420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:94984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.397 [2024-07-11 21:40:56.057434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.397 [2024-07-11 21:40:56.057450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:94992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.397 [2024-07-11 21:40:56.057464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.397 [2024-07-11 21:40:56.057480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:95008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.397 [2024-07-11 21:40:56.057507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.397 [2024-07-11 21:40:56.057524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:95024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.397 [2024-07-11 21:40:56.057539] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.397 [2024-07-11 21:40:56.057554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:95056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.397 [2024-07-11 21:40:56.057568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.397 [2024-07-11 21:40:56.057583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:95064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.397 [2024-07-11 21:40:56.057597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.397 [2024-07-11 21:40:56.057613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:95096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.397 [2024-07-11 21:40:56.057628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.397 [2024-07-11 21:40:56.057643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:95112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.397 [2024-07-11 21:40:56.057658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.397 [2024-07-11 21:40:56.057674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:95672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.397 [2024-07-11 21:40:56.057688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.397 [2024-07-11 21:40:56.057703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:95680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.397 [2024-07-11 21:40:56.057719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.397 [2024-07-11 21:40:56.057742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:95688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.397 [2024-07-11 21:40:56.057757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.397 [2024-07-11 21:40:56.057772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:95696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.397 [2024-07-11 21:40:56.057786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.397 [2024-07-11 21:40:56.057802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:95704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.397 [2024-07-11 21:40:56.057815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.397 [2024-07-11 21:40:56.057840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:95712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.397 [2024-07-11 21:40:56.057864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.397 [2024-07-11 21:40:56.057881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:95720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.397 [2024-07-11 21:40:56.057895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.397 [2024-07-11 21:40:56.057911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:95728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.397 [2024-07-11 21:40:56.057924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.397 [2024-07-11 21:40:56.057939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:95736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.397 [2024-07-11 21:40:56.057953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.397 [2024-07-11 21:40:56.057969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:95744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.397 [2024-07-11 21:40:56.057983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.397 [2024-07-11 21:40:56.057998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:95752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.397 [2024-07-11 21:40:56.058012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.397 [2024-07-11 21:40:56.058028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:95760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.397 [2024-07-11 21:40:56.058041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.397 [2024-07-11 21:40:56.058057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:95768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.397 [2024-07-11 21:40:56.058070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.397 [2024-07-11 21:40:56.058086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:95776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.397 [2024-07-11 21:40:56.058099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.397 [2024-07-11 21:40:56.058115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:95784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.397 [2024-07-11 21:40:56.058135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.397 [2024-07-11 21:40:56.058152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:95792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.397 [2024-07-11 21:40:56.058166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:27:46.397 [2024-07-11 21:40:56.058181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:95800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.397 [2024-07-11 21:40:56.058195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.397 [2024-07-11 21:40:56.058211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:95808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.397 [2024-07-11 21:40:56.058226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.397 [2024-07-11 21:40:56.058242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:95816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.397 [2024-07-11 21:40:56.058256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.397 [2024-07-11 21:40:56.058272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:95824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.397 [2024-07-11 21:40:56.058286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.397 [2024-07-11 21:40:56.058301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:95832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.397 [2024-07-11 21:40:56.058315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.397 [2024-07-11 21:40:56.058331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:95120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.397 [2024-07-11 21:40:56.058351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.397 [2024-07-11 21:40:56.058368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:95184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.398 [2024-07-11 21:40:56.058382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.398 [2024-07-11 21:40:56.058398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:95208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.398 [2024-07-11 21:40:56.058411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.398 [2024-07-11 21:40:56.058427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:95224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.398 [2024-07-11 21:40:56.058452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.398 [2024-07-11 21:40:56.058470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:95240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.398 [2024-07-11 21:40:56.058495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.398 [2024-07-11 21:40:56.058513] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:95256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.398 [2024-07-11 21:40:56.058527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.398 [2024-07-11 21:40:56.058551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:95272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.398 [2024-07-11 21:40:56.058567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.398 [2024-07-11 21:40:56.058583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:95280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.398 [2024-07-11 21:40:56.058597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.398 [2024-07-11 21:40:56.058612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:95840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.398 [2024-07-11 21:40:56.058637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.398 [2024-07-11 21:40:56.058654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:95848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.398 [2024-07-11 21:40:56.058668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.398 [2024-07-11 21:40:56.058684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:95856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.398 [2024-07-11 21:40:56.058697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.398 [2024-07-11 21:40:56.058713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:95864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.398 [2024-07-11 21:40:56.058727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.398 [2024-07-11 21:40:56.058743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:95872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.398 [2024-07-11 21:40:56.058757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.398 [2024-07-11 21:40:56.058772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:95880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.398 [2024-07-11 21:40:56.058786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.398 [2024-07-11 21:40:56.058802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:95888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.398 [2024-07-11 21:40:56.058816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.398 [2024-07-11 21:40:56.058832] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:95896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.398 [2024-07-11 21:40:56.058845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.398 [2024-07-11 21:40:56.058861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:95904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.398 [2024-07-11 21:40:56.058881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.398 [2024-07-11 21:40:56.058897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:95912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.398 [2024-07-11 21:40:56.058911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.398 [2024-07-11 21:40:56.058928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:95920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.398 [2024-07-11 21:40:56.058948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.398 [2024-07-11 21:40:56.058965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:95928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.398 [2024-07-11 21:40:56.058979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.398 [2024-07-11 21:40:56.058999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:95936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.398 [2024-07-11 21:40:56.059013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.398 [2024-07-11 21:40:56.059029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:95944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.398 [2024-07-11 21:40:56.059043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.398 [2024-07-11 21:40:56.059059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:95952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.398 [2024-07-11 21:40:56.059072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.398 [2024-07-11 21:40:56.059088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:95960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.398 [2024-07-11 21:40:56.059102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.398 [2024-07-11 21:40:56.059118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:95968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.398 [2024-07-11 21:40:56.059139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.398 [2024-07-11 21:40:56.059156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:3 nsid:1 lba:95976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.398 [2024-07-11 21:40:56.059169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.398 [2024-07-11 21:40:56.059185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:95288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.398 [2024-07-11 21:40:56.059199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.398 [2024-07-11 21:40:56.059215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:95296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.398 [2024-07-11 21:40:56.059229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.398 [2024-07-11 21:40:56.059245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:95312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.398 [2024-07-11 21:40:56.059258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.398 [2024-07-11 21:40:56.059274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:95320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.398 [2024-07-11 21:40:56.059288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.398 [2024-07-11 21:40:56.059304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:95328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.398 [2024-07-11 21:40:56.059317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.398 [2024-07-11 21:40:56.059338] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2344b80 is same with the state(5) to be set 00:27:46.398 [2024-07-11 21:40:56.059400] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:46.398 [2024-07-11 21:40:56.059413] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:46.398 [2024-07-11 21:40:56.059430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95360 len:8 PRP1 0x0 PRP2 0x0 00:27:46.398 [2024-07-11 21:40:56.059444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.398 [2024-07-11 21:40:56.059587] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2344b80 was disconnected and freed. reset controller. 
00:27:46.398 [2024-07-11 21:40:56.059616] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:27:46.398 [2024-07-11 21:40:56.059674] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:46.398 [2024-07-11 21:40:56.059696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.398 [2024-07-11 21:40:56.059711] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:46.398 [2024-07-11 21:40:56.059725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.398 [2024-07-11 21:40:56.059739] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:46.398 [2024-07-11 21:40:56.059753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.398 [2024-07-11 21:40:56.059767] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:46.398 [2024-07-11 21:40:56.059780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.398 [2024-07-11 21:40:56.059794] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:46.398 [2024-07-11 21:40:56.062324] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:46.398 [2024-07-11 21:40:56.062367] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2325c80 (9): Bad file descriptor 00:27:46.398 [2024-07-11 21:40:56.093427] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
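The block above is one complete failover: bdev_nvme_disconnected_qpair_cb frees the dead qpair, bdev_nvme_failover_trid moves the controller from 10.0.0.2:4421 to the next registered path 10.0.0.2:4422, the admin queue aborts its outstanding ASYNC EVENT REQUESTs, and the cycle ends with "Resetting controller successful." The alternate paths being cycled through were registered by attaching the same subsystem once per listener port; a condensed sketch of that registration, taken from the failover.sh@78-80 trace later in this log (the loop is mine, the script issues the three calls individually):

# Register 10.0.0.2:4420/4421/4422 as transport IDs for the same bdev (NVMe0),
# so losing one path makes bdev_nvme fail over to the next one.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
for port in 4420 4421 4422; do
  "$RPC" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
         -b NVMe0 -t tcp -a 10.0.0.2 -s "$port" -f ipv4 \
         -n nqn.2016-06.io.spdk:cnode1
done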
00:27:46.398 [2024-07-11 21:41:00.620051] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:46.398 [2024-07-11 21:41:00.620143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.398 [2024-07-11 21:41:00.620164] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:46.398 [2024-07-11 21:41:00.620179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.398 [2024-07-11 21:41:00.620194] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:46.398 [2024-07-11 21:41:00.620208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.398 [2024-07-11 21:41:00.620222] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:46.398 [2024-07-11 21:41:00.620236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.398 [2024-07-11 21:41:00.620249] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2325c80 is same with the state(5) to be set 00:27:46.398 [2024-07-11 21:41:00.621415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:39216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.398 [2024-07-11 21:41:00.621445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.398 [2024-07-11 21:41:00.621470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:39224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.398 [2024-07-11 21:41:00.621501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.398 [2024-07-11 21:41:00.621520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:39240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.398 [2024-07-11 21:41:00.621535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.398 [2024-07-11 21:41:00.621550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:38584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.398 [2024-07-11 21:41:00.621565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.398 [2024-07-11 21:41:00.621580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:38616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.398 [2024-07-11 21:41:00.621594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.398 [2024-07-11 21:41:00.621610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:38624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.398 [2024-07-11 21:41:00.621623] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.398 [2024-07-11 21:41:00.621639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:38640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.398 [2024-07-11 21:41:00.621652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.398 [2024-07-11 21:41:00.621668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:38648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.398 [2024-07-11 21:41:00.621682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.398 [2024-07-11 21:41:00.621697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:38656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.398 [2024-07-11 21:41:00.621711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.399 [2024-07-11 21:41:00.621727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:38672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.399 [2024-07-11 21:41:00.621741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.399 [2024-07-11 21:41:00.621756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:38712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.399 [2024-07-11 21:41:00.621770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.399 [2024-07-11 21:41:00.621785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:39248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.399 [2024-07-11 21:41:00.621799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.399 [2024-07-11 21:41:00.621814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:39256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.399 [2024-07-11 21:41:00.621840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.399 [2024-07-11 21:41:00.621857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:39264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.399 [2024-07-11 21:41:00.621871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.399 [2024-07-11 21:41:00.621887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:39280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.399 [2024-07-11 21:41:00.621900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.399 [2024-07-11 21:41:00.621916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:39288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.399 [2024-07-11 21:41:00.621929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.399 [2024-07-11 21:41:00.621945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:39312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.399 [2024-07-11 21:41:00.621961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.399 [2024-07-11 21:41:00.621978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:39320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.399 [2024-07-11 21:41:00.621992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.399 [2024-07-11 21:41:00.622007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:39328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.399 [2024-07-11 21:41:00.622021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.399 [2024-07-11 21:41:00.622037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:39352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.399 [2024-07-11 21:41:00.622052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.399 [2024-07-11 21:41:00.622067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:39360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.399 [2024-07-11 21:41:00.622081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.399 [2024-07-11 21:41:00.622097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:39368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.399 [2024-07-11 21:41:00.622110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.399 [2024-07-11 21:41:00.622126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:39376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.399 [2024-07-11 21:41:00.622140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.399 [2024-07-11 21:41:00.622155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:39384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.399 [2024-07-11 21:41:00.622169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.399 [2024-07-11 21:41:00.622185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:39392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.399 [2024-07-11 21:41:00.622199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.399 [2024-07-11 21:41:00.622222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:38736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.399 [2024-07-11 21:41:00.622237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.399 [2024-07-11 21:41:00.622258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:38744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.399 [2024-07-11 21:41:00.622272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.399 [2024-07-11 21:41:00.622287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:38784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.399 [2024-07-11 21:41:00.622301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.399 [2024-07-11 21:41:00.622316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:38792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.399 [2024-07-11 21:41:00.622330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.399 [2024-07-11 21:41:00.622346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:38808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.399 [2024-07-11 21:41:00.622359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.399 [2024-07-11 21:41:00.622375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:38816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.399 [2024-07-11 21:41:00.622389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.399 [2024-07-11 21:41:00.622405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:38824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.399 [2024-07-11 21:41:00.622418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.399 [2024-07-11 21:41:00.622434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:38840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.399 [2024-07-11 21:41:00.622463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.399 [2024-07-11 21:41:00.622481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.399 [2024-07-11 21:41:00.622507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.399 [2024-07-11 21:41:00.622523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:39408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.399 [2024-07-11 21:41:00.622537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.399 [2024-07-11 21:41:00.622553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:39416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.399 [2024-07-11 21:41:00.622567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.399 
[2024-07-11 21:41:00.622582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:39424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.399 [2024-07-11 21:41:00.622596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.399 [2024-07-11 21:41:00.622612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:39432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.399 [2024-07-11 21:41:00.622633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.399 [2024-07-11 21:41:00.622650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:39440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.399 [2024-07-11 21:41:00.622664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.399 [2024-07-11 21:41:00.622680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:39448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.399 [2024-07-11 21:41:00.622698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.399 [2024-07-11 21:41:00.622713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:39456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.399 [2024-07-11 21:41:00.622727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.399 [2024-07-11 21:41:00.622743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:39464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.399 [2024-07-11 21:41:00.622757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.399 [2024-07-11 21:41:00.622773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:39472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.399 [2024-07-11 21:41:00.622787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.399 [2024-07-11 21:41:00.622803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:39480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.399 [2024-07-11 21:41:00.622817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.399 [2024-07-11 21:41:00.622832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:39488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.399 [2024-07-11 21:41:00.622846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.399 [2024-07-11 21:41:00.622861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:39496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.399 [2024-07-11 21:41:00.622875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.399 [2024-07-11 21:41:00.622891] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:39504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.399 [2024-07-11 21:41:00.622905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.399 [2024-07-11 21:41:00.622920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:39512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.399 [2024-07-11 21:41:00.622934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.399 [2024-07-11 21:41:00.622949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:39520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.399 [2024-07-11 21:41:00.622964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.399 [2024-07-11 21:41:00.622980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:39528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.399 [2024-07-11 21:41:00.622993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.399 [2024-07-11 21:41:00.623009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:39536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.399 [2024-07-11 21:41:00.623030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.399 [2024-07-11 21:41:00.623047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:39544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.399 [2024-07-11 21:41:00.623061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.399 [2024-07-11 21:41:00.623076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:38848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.399 [2024-07-11 21:41:00.623090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.399 [2024-07-11 21:41:00.623106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:38864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.399 [2024-07-11 21:41:00.623119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.399 [2024-07-11 21:41:00.623135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:38888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.399 [2024-07-11 21:41:00.623149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.399 [2024-07-11 21:41:00.623164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:38896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.399 [2024-07-11 21:41:00.623178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.399 [2024-07-11 21:41:00.623194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:95 nsid:1 lba:38936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.399 [2024-07-11 21:41:00.623207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.399 [2024-07-11 21:41:00.623223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:38944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.400 [2024-07-11 21:41:00.623237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.400 [2024-07-11 21:41:00.623253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:38960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.400 [2024-07-11 21:41:00.623267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.400 [2024-07-11 21:41:00.623283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:38976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.400 [2024-07-11 21:41:00.623296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.400 [2024-07-11 21:41:00.623312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:39552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.400 [2024-07-11 21:41:00.623326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.400 [2024-07-11 21:41:00.623341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:39560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.400 [2024-07-11 21:41:00.623356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.400 [2024-07-11 21:41:00.623371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:39568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.400 [2024-07-11 21:41:00.623385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.400 [2024-07-11 21:41:00.623406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:39576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.400 [2024-07-11 21:41:00.623421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.400 [2024-07-11 21:41:00.623437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:39584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.400 [2024-07-11 21:41:00.623453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.400 [2024-07-11 21:41:00.623469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:39592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.400 [2024-07-11 21:41:00.623493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.400 [2024-07-11 21:41:00.623511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:39600 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.400 [2024-07-11 21:41:00.623525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.400 [2024-07-11 21:41:00.623541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:39608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.400 [2024-07-11 21:41:00.623555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.400 [2024-07-11 21:41:00.623570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:39616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.400 [2024-07-11 21:41:00.623584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.400 [2024-07-11 21:41:00.623600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:39624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.400 [2024-07-11 21:41:00.623614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.400 [2024-07-11 21:41:00.623630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:39632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.400 [2024-07-11 21:41:00.623643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.400 [2024-07-11 21:41:00.623659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:39640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.400 [2024-07-11 21:41:00.623672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.400 [2024-07-11 21:41:00.623688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:39648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.400 [2024-07-11 21:41:00.623702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.400 [2024-07-11 21:41:00.623717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:39656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.400 [2024-07-11 21:41:00.623732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.400 [2024-07-11 21:41:00.623747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:39664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.400 [2024-07-11 21:41:00.623761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.400 [2024-07-11 21:41:00.623777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:39672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.400 [2024-07-11 21:41:00.623798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.400 [2024-07-11 21:41:00.623815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:39680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.400 
[2024-07-11 21:41:00.623829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.400 [2024-07-11 21:41:00.623845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:38984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.400 [2024-07-11 21:41:00.623859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.400 [2024-07-11 21:41:00.623874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:38992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.400 [2024-07-11 21:41:00.623888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.400 [2024-07-11 21:41:00.623904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:39008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.400 [2024-07-11 21:41:00.623918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.400 [2024-07-11 21:41:00.623933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:39016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.400 [2024-07-11 21:41:00.623948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.400 [2024-07-11 21:41:00.623964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:39024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.400 [2024-07-11 21:41:00.623978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.400 [2024-07-11 21:41:00.623993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:39032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.400 [2024-07-11 21:41:00.624008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.400 [2024-07-11 21:41:00.624023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:39056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.400 [2024-07-11 21:41:00.624037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.400 [2024-07-11 21:41:00.624053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:39064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.400 [2024-07-11 21:41:00.624066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.400 [2024-07-11 21:41:00.624082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:39688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.400 [2024-07-11 21:41:00.624096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.400 [2024-07-11 21:41:00.624112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:39696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.400 [2024-07-11 21:41:00.624126] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.400 [2024-07-11 21:41:00.624141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:39704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.400 [2024-07-11 21:41:00.624155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.400 [2024-07-11 21:41:00.624177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:39712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.400 [2024-07-11 21:41:00.624192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.400 [2024-07-11 21:41:00.624207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:39720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.400 [2024-07-11 21:41:00.624229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.400 [2024-07-11 21:41:00.624246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:39728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.400 [2024-07-11 21:41:00.624260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.400 [2024-07-11 21:41:00.624276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:39736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.400 [2024-07-11 21:41:00.624290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.400 [2024-07-11 21:41:00.624306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:39744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.400 [2024-07-11 21:41:00.624320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.400 [2024-07-11 21:41:00.624335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:39752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.400 [2024-07-11 21:41:00.624349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.400 [2024-07-11 21:41:00.624365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:39760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.400 [2024-07-11 21:41:00.624379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.400 [2024-07-11 21:41:00.624394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:39768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.400 [2024-07-11 21:41:00.624409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.400 [2024-07-11 21:41:00.624425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:39776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.400 [2024-07-11 21:41:00.624439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.400 [2024-07-11 21:41:00.624454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:39080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.400 [2024-07-11 21:41:00.624468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.400 [2024-07-11 21:41:00.624497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:39096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.400 [2024-07-11 21:41:00.624513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.400 [2024-07-11 21:41:00.624529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:39104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.400 [2024-07-11 21:41:00.624543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.400 [2024-07-11 21:41:00.624558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:39136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.400 [2024-07-11 21:41:00.624579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.400 [2024-07-11 21:41:00.624597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:39144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.400 [2024-07-11 21:41:00.624612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.401 [2024-07-11 21:41:00.624627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:39152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.401 [2024-07-11 21:41:00.624641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.401 [2024-07-11 21:41:00.624656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:39160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.401 [2024-07-11 21:41:00.624670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.401 [2024-07-11 21:41:00.624686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:39168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.401 [2024-07-11 21:41:00.624700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.401 [2024-07-11 21:41:00.624715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:39784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.401 [2024-07-11 21:41:00.624735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.401 [2024-07-11 21:41:00.624751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:39792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.401 [2024-07-11 21:41:00.624765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.401 [2024-07-11 21:41:00.624780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:39800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.401 [2024-07-11 21:41:00.624794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.401 [2024-07-11 21:41:00.624810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:39808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.401 [2024-07-11 21:41:00.624823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.401 [2024-07-11 21:41:00.624842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:39816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.401 [2024-07-11 21:41:00.624856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.401 [2024-07-11 21:41:00.624872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:39824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.401 [2024-07-11 21:41:00.624886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.401 [2024-07-11 21:41:00.624901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:39832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.401 [2024-07-11 21:41:00.624915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.401 [2024-07-11 21:41:00.624931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:39840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.401 [2024-07-11 21:41:00.624945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.401 [2024-07-11 21:41:00.624961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:39848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.401 [2024-07-11 21:41:00.624981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.401 [2024-07-11 21:41:00.624998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:39856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.401 [2024-07-11 21:41:00.625012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.401 [2024-07-11 21:41:00.625027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:39864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.401 [2024-07-11 21:41:00.625041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.401 [2024-07-11 21:41:00.625057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:39872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.401 [2024-07-11 21:41:00.625070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.401 
[2024-07-11 21:41:00.625086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:39880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.401 [2024-07-11 21:41:00.625100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.401 [2024-07-11 21:41:00.625115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:39888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.401 [2024-07-11 21:41:00.625129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.401 [2024-07-11 21:41:00.625144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:39896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.401 [2024-07-11 21:41:00.625158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.401 [2024-07-11 21:41:00.625173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:39200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.401 [2024-07-11 21:41:00.625187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.401 [2024-07-11 21:41:00.625203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:39208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.401 [2024-07-11 21:41:00.625222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.401 [2024-07-11 21:41:00.625238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:39232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.401 [2024-07-11 21:41:00.625252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.401 [2024-07-11 21:41:00.625268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:39272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.401 [2024-07-11 21:41:00.625281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.401 [2024-07-11 21:41:00.625297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:39296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.401 [2024-07-11 21:41:00.625312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.401 [2024-07-11 21:41:00.625328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:39304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.401 [2024-07-11 21:41:00.625342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.401 [2024-07-11 21:41:00.625364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:39336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.401 [2024-07-11 21:41:00.625378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.401 [2024-07-11 21:41:00.625393] nvme_tcp.c: 
322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2321aa0 is same with the state(5) to be set 00:27:46.401 [2024-07-11 21:41:00.625409] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:46.401 [2024-07-11 21:41:00.625420] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:46.401 [2024-07-11 21:41:00.625431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:39344 len:8 PRP1 0x0 PRP2 0x0 00:27:46.401 [2024-07-11 21:41:00.625445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.401 [2024-07-11 21:41:00.625518] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2321aa0 was disconnected and freed. reset controller. 00:27:46.401 [2024-07-11 21:41:00.625538] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:27:46.401 [2024-07-11 21:41:00.625554] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:46.401 [2024-07-11 21:41:00.628067] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:46.401 [2024-07-11 21:41:00.628106] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2325c80 (9): Bad file descriptor 00:27:46.401 [2024-07-11 21:41:00.658648] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:27:46.401 00:27:46.401 Latency(us) 00:27:46.401 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:46.401 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:46.401 Verification LBA range: start 0x0 length 0x4000 00:27:46.401 NVMe0n1 : 15.01 12052.13 47.08 321.24 0.00 10326.23 422.63 22043.93 00:27:46.401 =================================================================================================================== 00:27:46.401 Total : 12052.13 47.08 321.24 0.00 10326.23 422.63 22043.93 00:27:46.401 Received shutdown signal, test time was about 15.000000 seconds 00:27:46.401 00:27:46.401 Latency(us) 00:27:46.401 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:46.401 =================================================================================================================== 00:27:46.401 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:46.401 21:41:06 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:27:46.401 21:41:06 -- host/failover.sh@65 -- # count=3 00:27:46.401 21:41:06 -- host/failover.sh@67 -- # (( count != 3 )) 00:27:46.401 21:41:06 -- host/failover.sh@73 -- # bdevperf_pid=82346 00:27:46.401 21:41:06 -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:27:46.401 21:41:06 -- host/failover.sh@75 -- # waitforlisten 82346 /var/tmp/bdevperf.sock 00:27:46.401 21:41:06 -- common/autotest_common.sh@819 -- # '[' -z 82346 ']' 00:27:46.401 21:41:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:46.401 21:41:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:46.401 21:41:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
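The 15-second run above ends with the pass/fail check at failover.sh@65-67: the script counts "Resetting controller successful" notices in the captured output and expects one per forced failover, three in this run (count=3). It then launches a second, short-lived bdevperf in -z mode (idle until started over RPC) on /var/tmp/bdevperf.sock, and waitforlisten blocks until that socket, announced just below, is accepting RPCs. A condensed sketch of the step, with $output and $rootdir standing in for the script's real variables (their names are not visible in this trace):

count=$(grep -c 'Resetting controller successful' "$output")   # 3 expected here
(( count != 3 )) && exit 1           # a mismatch fails the test (exact handling assumed)
"$rootdir"/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock \
    -q 128 -o 4096 -w verify -t 1 -f &
bdevperf_pid=$!                      # 82346 in this run
waitforlisten "$bdevperf_pid" /var/tmp/bdevperf.sock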
00:27:46.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:46.401 21:41:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:46.401 21:41:06 -- common/autotest_common.sh@10 -- # set +x 00:27:46.967 21:41:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:46.967 21:41:07 -- common/autotest_common.sh@852 -- # return 0 00:27:46.967 21:41:07 -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:47.225 [2024-07-11 21:41:07.932637] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:47.225 21:41:07 -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:27:47.482 [2024-07-11 21:41:08.180831] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:27:47.482 21:41:08 -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:47.740 NVMe0n1 00:27:47.740 21:41:08 -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:47.998 00:27:47.998 21:41:08 -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:48.316 00:27:48.316 21:41:09 -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:48.316 21:41:09 -- host/failover.sh@82 -- # grep -q NVMe0 00:27:48.573 21:41:09 -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:48.830 21:41:09 -- host/failover.sh@87 -- # sleep 3 00:27:52.108 21:41:12 -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:52.108 21:41:12 -- host/failover.sh@88 -- # grep -q NVMe0 00:27:52.108 21:41:12 -- host/failover.sh@90 -- # run_test_pid=82425 00:27:52.108 21:41:12 -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:52.108 21:41:12 -- host/failover.sh@92 -- # wait 82425 00:27:53.479 0 00:27:53.479 21:41:14 -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:27:53.479 [2024-07-11 21:41:06.683929] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
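With NVMe0 attached on ports 4420, 4421 and 4422, the script forces the failover it will later verify: it detaches the active path (4420), sleeps while bdev_nvme reconnects on the next trid, confirms the controller is still present, then drives the verify workload through bdevperf.py perform_tests (run_test_pid 82425 here); the cat of try.txt that begins just above and continues below is that run's captured output. A hedged sketch mirroring failover.sh@84-92 from the trace:

# Drop the active path; bdev_nvme should fail over to the next registered trid.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
"$RPC" -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
       -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
sleep 3
"$RPC" -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -q NVMe0   # controller must survive
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bdevperf.sock perform_tests &
wait $!   # returns 0 when the verify workload completes cleanly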
00:27:53.479 [2024-07-11 21:41:06.684055] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82346 ] 00:27:53.479 [2024-07-11 21:41:06.825448] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:53.479 [2024-07-11 21:41:06.920667] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:53.479 [2024-07-11 21:41:09.608834] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:27:53.479 [2024-07-11 21:41:09.608989] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:53.479 [2024-07-11 21:41:09.609015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.479 [2024-07-11 21:41:09.609034] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:53.479 [2024-07-11 21:41:09.609048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.479 [2024-07-11 21:41:09.609064] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:53.479 [2024-07-11 21:41:09.609079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.479 [2024-07-11 21:41:09.609093] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:53.479 [2024-07-11 21:41:09.609106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.479 [2024-07-11 21:41:09.609120] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:53.479 [2024-07-11 21:41:09.609179] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:53.479 [2024-07-11 21:41:09.609212] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf92c80 (9): Bad file descriptor 00:27:53.479 [2024-07-11 21:41:09.617743] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:27:53.479 Running I/O for 1 seconds... 
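try.txt is the short run's captured log: bdevperf starts on a single core, attaches through the primary path, and the one forced failover (4420 to 4421) lands mid-run and completes before the one-second verify workload prints its latency summary below. Purely as an illustration of reading that file (these greps are not part of failover.sh), the expected markers can be checked like this:

grep -c 'Start failover from 10.0.0.2:4420 to 10.0.0.2:4421' try.txt   # expect 1
grep -c 'Resetting controller successful' try.txt                       # expect 1
grep -q 'Running I/O for 1 seconds' try.txt && echo 'workload reached steady state'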
00:27:53.479 00:27:53.479 Latency(us) 00:27:53.479 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:53.479 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:53.479 Verification LBA range: start 0x0 length 0x4000 00:27:53.479 NVMe0n1 : 1.01 12801.88 50.01 0.00 0.00 9943.41 1072.41 11617.75 00:27:53.479 =================================================================================================================== 00:27:53.479 Total : 12801.88 50.01 0.00 0.00 9943.41 1072.41 11617.75 00:27:53.479 21:41:14 -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:53.479 21:41:14 -- host/failover.sh@95 -- # grep -q NVMe0 00:27:53.479 21:41:14 -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:53.737 21:41:14 -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:53.737 21:41:14 -- host/failover.sh@99 -- # grep -q NVMe0 00:27:53.994 21:41:14 -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:54.251 21:41:15 -- host/failover.sh@101 -- # sleep 3 00:27:57.542 21:41:18 -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:57.542 21:41:18 -- host/failover.sh@103 -- # grep -q NVMe0 00:27:57.542 21:41:18 -- host/failover.sh@108 -- # killprocess 82346 00:27:57.542 21:41:18 -- common/autotest_common.sh@926 -- # '[' -z 82346 ']' 00:27:57.543 21:41:18 -- common/autotest_common.sh@930 -- # kill -0 82346 00:27:57.543 21:41:18 -- common/autotest_common.sh@931 -- # uname 00:27:57.543 21:41:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:57.543 21:41:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 82346 00:27:57.543 21:41:18 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:27:57.543 21:41:18 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:27:57.543 killing process with pid 82346 00:27:57.543 21:41:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 82346' 00:27:57.543 21:41:18 -- common/autotest_common.sh@945 -- # kill 82346 00:27:57.543 21:41:18 -- common/autotest_common.sh@950 -- # wait 82346 00:27:57.801 21:41:18 -- host/failover.sh@110 -- # sync 00:27:57.801 21:41:18 -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:58.059 21:41:18 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:27:58.059 21:41:18 -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:27:58.059 21:41:18 -- host/failover.sh@116 -- # nvmftestfini 00:27:58.059 21:41:18 -- nvmf/common.sh@476 -- # nvmfcleanup 00:27:58.059 21:41:18 -- nvmf/common.sh@116 -- # sync 00:27:58.059 21:41:18 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:27:58.059 21:41:18 -- nvmf/common.sh@119 -- # set +e 00:27:58.059 21:41:18 -- nvmf/common.sh@120 -- # for i in {1..20} 00:27:58.059 21:41:18 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:27:58.059 rmmod nvme_tcp 00:27:58.059 rmmod nvme_fabrics 00:27:58.059 rmmod nvme_keyring 00:27:58.059 21:41:18 -- nvmf/common.sh@122 
-- # modprobe -v -r nvme-fabrics 00:27:58.059 21:41:18 -- nvmf/common.sh@123 -- # set -e 00:27:58.059 21:41:18 -- nvmf/common.sh@124 -- # return 0 00:27:58.059 21:41:18 -- nvmf/common.sh@477 -- # '[' -n 82090 ']' 00:27:58.059 21:41:18 -- nvmf/common.sh@478 -- # killprocess 82090 00:27:58.059 21:41:18 -- common/autotest_common.sh@926 -- # '[' -z 82090 ']' 00:27:58.059 21:41:18 -- common/autotest_common.sh@930 -- # kill -0 82090 00:27:58.059 21:41:18 -- common/autotest_common.sh@931 -- # uname 00:27:58.059 21:41:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:58.059 21:41:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 82090 00:27:58.059 21:41:18 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:27:58.059 21:41:18 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:27:58.059 killing process with pid 82090 00:27:58.059 21:41:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 82090' 00:27:58.059 21:41:18 -- common/autotest_common.sh@945 -- # kill 82090 00:27:58.059 21:41:18 -- common/autotest_common.sh@950 -- # wait 82090 00:27:58.316 21:41:19 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:27:58.316 21:41:19 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:27:58.316 21:41:19 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:27:58.316 21:41:19 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:58.316 21:41:19 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:27:58.316 21:41:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:58.316 21:41:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:58.316 21:41:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:58.316 21:41:19 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:27:58.316 00:27:58.316 real 0m32.937s 00:27:58.316 user 2m7.935s 00:27:58.316 sys 0m5.462s 00:27:58.316 21:41:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:58.316 ************************************ 00:27:58.316 END TEST nvmf_failover 00:27:58.316 ************************************ 00:27:58.316 21:41:19 -- common/autotest_common.sh@10 -- # set +x 00:27:58.574 21:41:19 -- nvmf/nvmf.sh@101 -- # run_test nvmf_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:27:58.574 21:41:19 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:27:58.574 21:41:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:58.574 21:41:19 -- common/autotest_common.sh@10 -- # set +x 00:27:58.574 ************************************ 00:27:58.574 START TEST nvmf_discovery 00:27:58.574 ************************************ 00:27:58.574 21:41:19 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:27:58.574 * Looking for test storage... 
00:27:58.574 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:27:58.574 21:41:19 -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:58.574 21:41:19 -- nvmf/common.sh@7 -- # uname -s 00:27:58.574 21:41:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:58.574 21:41:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:58.574 21:41:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:58.574 21:41:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:58.574 21:41:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:58.574 21:41:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:58.574 21:41:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:58.574 21:41:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:58.574 21:41:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:58.574 21:41:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:58.574 21:41:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:65f0dc09-2f81-4c7b-a413-2a2a000e2750 00:27:58.574 21:41:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=65f0dc09-2f81-4c7b-a413-2a2a000e2750 00:27:58.574 21:41:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:58.574 21:41:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:58.574 21:41:19 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:58.574 21:41:19 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:58.574 21:41:19 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:58.574 21:41:19 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:58.574 21:41:19 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:58.574 21:41:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:58.575 21:41:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:58.575 21:41:19 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:58.575 21:41:19 -- paths/export.sh@5 
-- # export PATH 00:27:58.575 21:41:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:58.575 21:41:19 -- nvmf/common.sh@46 -- # : 0 00:27:58.575 21:41:19 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:27:58.575 21:41:19 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:27:58.575 21:41:19 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:27:58.575 21:41:19 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:58.575 21:41:19 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:58.575 21:41:19 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:27:58.575 21:41:19 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:27:58.575 21:41:19 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:27:58.575 21:41:19 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:27:58.575 21:41:19 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:27:58.575 21:41:19 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:27:58.575 21:41:19 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:27:58.575 21:41:19 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:27:58.575 21:41:19 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:27:58.575 21:41:19 -- host/discovery.sh@25 -- # nvmftestinit 00:27:58.575 21:41:19 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:27:58.575 21:41:19 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:58.575 21:41:19 -- nvmf/common.sh@436 -- # prepare_net_devs 00:27:58.575 21:41:19 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:27:58.575 21:41:19 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:27:58.575 21:41:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:58.575 21:41:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:58.575 21:41:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:58.575 21:41:19 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:27:58.575 21:41:19 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:27:58.575 21:41:19 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:27:58.575 21:41:19 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:27:58.575 21:41:19 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:27:58.575 21:41:19 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:27:58.575 21:41:19 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:58.575 21:41:19 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:58.575 21:41:19 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:27:58.575 21:41:19 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:27:58.575 21:41:19 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:27:58.575 21:41:19 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:27:58.575 21:41:19 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:27:58.575 21:41:19 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:58.575 21:41:19 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:27:58.575 
21:41:19 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:27:58.575 21:41:19 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:27:58.575 21:41:19 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:27:58.575 21:41:19 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:27:58.575 21:41:19 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:27:58.575 Cannot find device "nvmf_tgt_br" 00:27:58.575 21:41:19 -- nvmf/common.sh@154 -- # true 00:27:58.575 21:41:19 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:27:58.575 Cannot find device "nvmf_tgt_br2" 00:27:58.575 21:41:19 -- nvmf/common.sh@155 -- # true 00:27:58.575 21:41:19 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:27:58.575 21:41:19 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:27:58.575 Cannot find device "nvmf_tgt_br" 00:27:58.575 21:41:19 -- nvmf/common.sh@157 -- # true 00:27:58.575 21:41:19 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:27:58.575 Cannot find device "nvmf_tgt_br2" 00:27:58.575 21:41:19 -- nvmf/common.sh@158 -- # true 00:27:58.575 21:41:19 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:27:58.575 21:41:19 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:27:58.575 21:41:19 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:58.575 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:58.575 21:41:19 -- nvmf/common.sh@161 -- # true 00:27:58.575 21:41:19 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:58.575 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:58.575 21:41:19 -- nvmf/common.sh@162 -- # true 00:27:58.575 21:41:19 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:27:58.833 21:41:19 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:27:58.833 21:41:19 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:27:58.833 21:41:19 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:27:58.833 21:41:19 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:27:58.833 21:41:19 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:27:58.833 21:41:19 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:27:58.833 21:41:19 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:27:58.833 21:41:19 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:27:58.833 21:41:19 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:27:58.833 21:41:19 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:27:58.833 21:41:19 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:27:58.833 21:41:19 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:27:58.833 21:41:19 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:58.833 21:41:19 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:27:58.833 21:41:19 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:27:58.833 21:41:19 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:27:58.833 21:41:19 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:27:58.833 21:41:19 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br 
master nvmf_br 00:27:58.833 21:41:19 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:27:58.833 21:41:19 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:27:58.833 21:41:19 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:27:58.833 21:41:19 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:58.833 21:41:19 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:27:58.833 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:58.833 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:27:58.833 00:27:58.833 --- 10.0.0.2 ping statistics --- 00:27:58.833 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:58.833 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:27:58.833 21:41:19 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:27:58.833 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:27:58.833 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:27:58.833 00:27:58.833 --- 10.0.0.3 ping statistics --- 00:27:58.833 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:58.833 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:27:58.833 21:41:19 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:27:58.833 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:58.833 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:27:58.833 00:27:58.833 --- 10.0.0.1 ping statistics --- 00:27:58.833 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:58.833 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:27:58.833 21:41:19 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:58.833 21:41:19 -- nvmf/common.sh@421 -- # return 0 00:27:58.833 21:41:19 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:27:58.833 21:41:19 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:58.833 21:41:19 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:27:58.833 21:41:19 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:27:58.833 21:41:19 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:58.833 21:41:19 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:27:58.833 21:41:19 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:27:58.833 21:41:19 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:27:58.833 21:41:19 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:27:58.833 21:41:19 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:58.833 21:41:19 -- common/autotest_common.sh@10 -- # set +x 00:27:58.833 21:41:19 -- nvmf/common.sh@469 -- # nvmfpid=82694 00:27:58.833 21:41:19 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:27:58.833 21:41:19 -- nvmf/common.sh@470 -- # waitforlisten 82694 00:27:58.833 21:41:19 -- common/autotest_common.sh@819 -- # '[' -z 82694 ']' 00:27:58.833 21:41:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:58.833 21:41:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:58.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:58.833 21:41:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
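For reference, the nvmf_veth_init sequence captured above builds a small virtual test network: a host-side veth (nvmf_init_if, 10.0.0.1/24) and two target-side veths (nvmf_tgt_if at 10.0.0.2/24, nvmf_tgt_if2 at 10.0.0.3/24) moved into the nvmf_tgt_ns_spdk namespace, with all peer ends enslaved to the nvmf_br bridge and reachability confirmed by the pings above. A condensed sketch of that setup, using the device and namespace names from the log (requires root and iproute2):

ip netns add nvmf_tgt_ns_spdk
# Three veth pairs: one for the initiator, two for the target
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
# Move the target ends into the namespace and assign addresses
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
# Bring everything up
ip link set nvmf_init_if up; ip link set nvmf_init_br up
ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
# Bridge the host-side ends together and allow NVMe/TCP traffic
ip link add nvmf_br type bridge; ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2   # initiator-to-target reachability check, as above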
00:27:58.833 21:41:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:58.833 21:41:19 -- common/autotest_common.sh@10 -- # set +x 00:27:58.833 [2024-07-11 21:41:19.771598] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:27:58.833 [2024-07-11 21:41:19.771699] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:59.090 [2024-07-11 21:41:19.910890] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:59.090 [2024-07-11 21:41:20.013258] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:59.090 [2024-07-11 21:41:20.013445] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:59.090 [2024-07-11 21:41:20.013461] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:59.090 [2024-07-11 21:41:20.013472] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:59.090 [2024-07-11 21:41:20.013526] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:00.023 21:41:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:00.023 21:41:20 -- common/autotest_common.sh@852 -- # return 0 00:28:00.023 21:41:20 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:28:00.023 21:41:20 -- common/autotest_common.sh@718 -- # xtrace_disable 00:28:00.023 21:41:20 -- common/autotest_common.sh@10 -- # set +x 00:28:00.023 21:41:20 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:00.023 21:41:20 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:00.023 21:41:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:00.023 21:41:20 -- common/autotest_common.sh@10 -- # set +x 00:28:00.023 [2024-07-11 21:41:20.833238] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:00.023 21:41:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:00.023 21:41:20 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:28:00.023 21:41:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:00.023 21:41:20 -- common/autotest_common.sh@10 -- # set +x 00:28:00.023 [2024-07-11 21:41:20.841371] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:28:00.023 21:41:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:00.023 21:41:20 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:28:00.023 21:41:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:00.023 21:41:20 -- common/autotest_common.sh@10 -- # set +x 00:28:00.023 null0 00:28:00.023 21:41:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:00.023 21:41:20 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:28:00.023 21:41:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:00.023 21:41:20 -- common/autotest_common.sh@10 -- # set +x 00:28:00.023 null1 00:28:00.023 21:41:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:00.023 21:41:20 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:28:00.023 21:41:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:00.023 21:41:20 -- 
common/autotest_common.sh@10 -- # set +x 00:28:00.023 21:41:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:00.023 21:41:20 -- host/discovery.sh@45 -- # hostpid=82726 00:28:00.024 21:41:20 -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:28:00.024 21:41:20 -- host/discovery.sh@46 -- # waitforlisten 82726 /tmp/host.sock 00:28:00.024 21:41:20 -- common/autotest_common.sh@819 -- # '[' -z 82726 ']' 00:28:00.024 21:41:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/tmp/host.sock 00:28:00.024 21:41:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:00.024 21:41:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:28:00.024 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:28:00.024 21:41:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:00.024 21:41:20 -- common/autotest_common.sh@10 -- # set +x 00:28:00.024 [2024-07-11 21:41:20.934312] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:28:00.024 [2024-07-11 21:41:20.934746] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82726 ] 00:28:00.281 [2024-07-11 21:41:21.076306] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:00.281 [2024-07-11 21:41:21.188789] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:00.281 [2024-07-11 21:41:21.189158] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:01.215 21:41:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:01.215 21:41:21 -- common/autotest_common.sh@852 -- # return 0 00:28:01.215 21:41:21 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:01.215 21:41:21 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:28:01.215 21:41:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:01.215 21:41:21 -- common/autotest_common.sh@10 -- # set +x 00:28:01.215 21:41:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:01.215 21:41:21 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:28:01.215 21:41:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:01.215 21:41:21 -- common/autotest_common.sh@10 -- # set +x 00:28:01.215 21:41:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:01.215 21:41:21 -- host/discovery.sh@72 -- # notify_id=0 00:28:01.215 21:41:21 -- host/discovery.sh@78 -- # get_subsystem_names 00:28:01.215 21:41:21 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:01.215 21:41:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:01.215 21:41:21 -- common/autotest_common.sh@10 -- # set +x 00:28:01.215 21:41:21 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:01.215 21:41:21 -- host/discovery.sh@59 -- # sort 00:28:01.215 21:41:21 -- host/discovery.sh@59 -- # xargs 00:28:01.216 21:41:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:01.216 21:41:21 -- host/discovery.sh@78 -- # [[ '' == '' ]] 00:28:01.216 21:41:21 -- host/discovery.sh@79 -- # get_bdev_list 00:28:01.216 
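At this point two SPDK processes are up: the nvmf target (pid 82694, -m 0x2, running inside the namespace on its default RPC socket) and a host-side app (pid 82726, -m 0x1, RPC socket /tmp/host.sock) that runs the discovery service. A hedged sketch of the bring-up RPCs captured above (all names, ports and sockets are taken from the log; scripts/rpc.py assumed relative to the SPDK repo root):

# Target RPCs: TCP transport, discovery listener, and two null bdevs for later use
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
scripts/rpc.py bdev_null_create null0 1000 512
scripts/rpc.py bdev_null_create null1 1000 512
# Host RPCs against /tmp/host.sock: start the discovery service
scripts/rpc.py -s /tmp/host.sock log_set_flag bdev_nvme
scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test
# No NVM subsystem is published yet, so both of these still come back empty
scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers
scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs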
21:41:21 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:01.216 21:41:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:01.216 21:41:21 -- common/autotest_common.sh@10 -- # set +x 00:28:01.216 21:41:21 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:01.216 21:41:21 -- host/discovery.sh@55 -- # sort 00:28:01.216 21:41:21 -- host/discovery.sh@55 -- # xargs 00:28:01.216 21:41:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:01.216 21:41:22 -- host/discovery.sh@79 -- # [[ '' == '' ]] 00:28:01.216 21:41:22 -- host/discovery.sh@81 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:28:01.216 21:41:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:01.216 21:41:22 -- common/autotest_common.sh@10 -- # set +x 00:28:01.216 21:41:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:01.216 21:41:22 -- host/discovery.sh@82 -- # get_subsystem_names 00:28:01.216 21:41:22 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:01.216 21:41:22 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:01.216 21:41:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:01.216 21:41:22 -- host/discovery.sh@59 -- # sort 00:28:01.216 21:41:22 -- common/autotest_common.sh@10 -- # set +x 00:28:01.216 21:41:22 -- host/discovery.sh@59 -- # xargs 00:28:01.216 21:41:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:01.216 21:41:22 -- host/discovery.sh@82 -- # [[ '' == '' ]] 00:28:01.216 21:41:22 -- host/discovery.sh@83 -- # get_bdev_list 00:28:01.216 21:41:22 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:01.216 21:41:22 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:01.216 21:41:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:01.216 21:41:22 -- host/discovery.sh@55 -- # sort 00:28:01.216 21:41:22 -- common/autotest_common.sh@10 -- # set +x 00:28:01.216 21:41:22 -- host/discovery.sh@55 -- # xargs 00:28:01.216 21:41:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:01.216 21:41:22 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:28:01.216 21:41:22 -- host/discovery.sh@85 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:28:01.216 21:41:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:01.216 21:41:22 -- common/autotest_common.sh@10 -- # set +x 00:28:01.474 21:41:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:01.474 21:41:22 -- host/discovery.sh@86 -- # get_subsystem_names 00:28:01.474 21:41:22 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:01.474 21:41:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:01.474 21:41:22 -- common/autotest_common.sh@10 -- # set +x 00:28:01.474 21:41:22 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:01.474 21:41:22 -- host/discovery.sh@59 -- # sort 00:28:01.474 21:41:22 -- host/discovery.sh@59 -- # xargs 00:28:01.474 21:41:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:01.474 21:41:22 -- host/discovery.sh@86 -- # [[ '' == '' ]] 00:28:01.474 21:41:22 -- host/discovery.sh@87 -- # get_bdev_list 00:28:01.474 21:41:22 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:01.474 21:41:22 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:01.474 21:41:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:01.474 21:41:22 -- host/discovery.sh@55 -- # sort 00:28:01.474 21:41:22 -- common/autotest_common.sh@10 -- # set +x 00:28:01.474 21:41:22 -- host/discovery.sh@55 -- # 
xargs 00:28:01.474 21:41:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:01.474 21:41:22 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:28:01.474 21:41:22 -- host/discovery.sh@91 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:01.474 21:41:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:01.474 21:41:22 -- common/autotest_common.sh@10 -- # set +x 00:28:01.474 [2024-07-11 21:41:22.289800] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:01.474 21:41:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:01.474 21:41:22 -- host/discovery.sh@92 -- # get_subsystem_names 00:28:01.474 21:41:22 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:01.474 21:41:22 -- host/discovery.sh@59 -- # sort 00:28:01.474 21:41:22 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:01.474 21:41:22 -- host/discovery.sh@59 -- # xargs 00:28:01.474 21:41:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:01.474 21:41:22 -- common/autotest_common.sh@10 -- # set +x 00:28:01.474 21:41:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:01.474 21:41:22 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:28:01.474 21:41:22 -- host/discovery.sh@93 -- # get_bdev_list 00:28:01.474 21:41:22 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:01.474 21:41:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:01.474 21:41:22 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:01.474 21:41:22 -- common/autotest_common.sh@10 -- # set +x 00:28:01.474 21:41:22 -- host/discovery.sh@55 -- # xargs 00:28:01.474 21:41:22 -- host/discovery.sh@55 -- # sort 00:28:01.474 21:41:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:01.474 21:41:22 -- host/discovery.sh@93 -- # [[ '' == '' ]] 00:28:01.474 21:41:22 -- host/discovery.sh@94 -- # get_notification_count 00:28:01.474 21:41:22 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:28:01.474 21:41:22 -- host/discovery.sh@74 -- # jq '. 
| length' 00:28:01.474 21:41:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:01.474 21:41:22 -- common/autotest_common.sh@10 -- # set +x 00:28:01.732 21:41:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:01.732 21:41:22 -- host/discovery.sh@74 -- # notification_count=0 00:28:01.732 21:41:22 -- host/discovery.sh@75 -- # notify_id=0 00:28:01.732 21:41:22 -- host/discovery.sh@95 -- # [[ 0 == 0 ]] 00:28:01.732 21:41:22 -- host/discovery.sh@99 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:28:01.732 21:41:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:01.732 21:41:22 -- common/autotest_common.sh@10 -- # set +x 00:28:01.732 21:41:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:01.732 21:41:22 -- host/discovery.sh@100 -- # sleep 1 00:28:02.032 [2024-07-11 21:41:22.926426] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:28:02.032 [2024-07-11 21:41:22.926507] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:28:02.032 [2024-07-11 21:41:22.926538] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:02.032 [2024-07-11 21:41:22.932473] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:28:02.288 [2024-07-11 21:41:22.988780] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:28:02.288 [2024-07-11 21:41:22.989030] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:28:02.545 21:41:23 -- host/discovery.sh@101 -- # get_subsystem_names 00:28:02.545 21:41:23 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:02.545 21:41:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:02.545 21:41:23 -- common/autotest_common.sh@10 -- # set +x 00:28:02.545 21:41:23 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:02.803 21:41:23 -- host/discovery.sh@59 -- # sort 00:28:02.803 21:41:23 -- host/discovery.sh@59 -- # xargs 00:28:02.803 21:41:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:02.803 21:41:23 -- host/discovery.sh@101 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:02.803 21:41:23 -- host/discovery.sh@102 -- # get_bdev_list 00:28:02.803 21:41:23 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:02.803 21:41:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:02.803 21:41:23 -- common/autotest_common.sh@10 -- # set +x 00:28:02.803 21:41:23 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:02.803 21:41:23 -- host/discovery.sh@55 -- # xargs 00:28:02.803 21:41:23 -- host/discovery.sh@55 -- # sort 00:28:02.803 21:41:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:02.803 21:41:23 -- host/discovery.sh@102 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:28:02.803 21:41:23 -- host/discovery.sh@103 -- # get_subsystem_paths nvme0 00:28:02.803 21:41:23 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:28:02.803 21:41:23 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:28:02.803 21:41:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:02.803 21:41:23 -- host/discovery.sh@63 -- # sort -n 00:28:02.803 21:41:23 -- common/autotest_common.sh@10 -- # set +x 00:28:02.803 21:41:23 -- 
host/discovery.sh@63 -- # xargs 00:28:02.803 21:41:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:02.803 21:41:23 -- host/discovery.sh@103 -- # [[ 4420 == \4\4\2\0 ]] 00:28:02.803 21:41:23 -- host/discovery.sh@104 -- # get_notification_count 00:28:02.803 21:41:23 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:28:02.803 21:41:23 -- host/discovery.sh@74 -- # jq '. | length' 00:28:02.803 21:41:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:02.803 21:41:23 -- common/autotest_common.sh@10 -- # set +x 00:28:02.803 21:41:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:02.803 21:41:23 -- host/discovery.sh@74 -- # notification_count=1 00:28:02.803 21:41:23 -- host/discovery.sh@75 -- # notify_id=1 00:28:02.803 21:41:23 -- host/discovery.sh@105 -- # [[ 1 == 1 ]] 00:28:02.803 21:41:23 -- host/discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:28:02.804 21:41:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:02.804 21:41:23 -- common/autotest_common.sh@10 -- # set +x 00:28:02.804 21:41:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:02.804 21:41:23 -- host/discovery.sh@109 -- # sleep 1 00:28:04.179 21:41:24 -- host/discovery.sh@110 -- # get_bdev_list 00:28:04.179 21:41:24 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:04.179 21:41:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:04.179 21:41:24 -- host/discovery.sh@55 -- # sort 00:28:04.179 21:41:24 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:04.179 21:41:24 -- common/autotest_common.sh@10 -- # set +x 00:28:04.179 21:41:24 -- host/discovery.sh@55 -- # xargs 00:28:04.179 21:41:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:04.179 21:41:24 -- host/discovery.sh@110 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:28:04.179 21:41:24 -- host/discovery.sh@111 -- # get_notification_count 00:28:04.179 21:41:24 -- host/discovery.sh@74 -- # jq '. 
| length' 00:28:04.179 21:41:24 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:28:04.179 21:41:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:04.179 21:41:24 -- common/autotest_common.sh@10 -- # set +x 00:28:04.179 21:41:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:04.179 21:41:24 -- host/discovery.sh@74 -- # notification_count=1 00:28:04.179 21:41:24 -- host/discovery.sh@75 -- # notify_id=2 00:28:04.179 21:41:24 -- host/discovery.sh@112 -- # [[ 1 == 1 ]] 00:28:04.179 21:41:24 -- host/discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:28:04.179 21:41:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:04.179 21:41:24 -- common/autotest_common.sh@10 -- # set +x 00:28:04.179 [2024-07-11 21:41:24.848716] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:04.179 [2024-07-11 21:41:24.849371] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:28:04.179 [2024-07-11 21:41:24.849403] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:04.179 21:41:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:04.179 21:41:24 -- host/discovery.sh@117 -- # sleep 1 00:28:04.179 [2024-07-11 21:41:24.855365] bdev_nvme.c:6683:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:28:04.179 [2024-07-11 21:41:24.919690] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:28:04.179 [2024-07-11 21:41:24.919732] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:28:04.179 [2024-07-11 21:41:24.919741] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:28:05.110 21:41:25 -- host/discovery.sh@118 -- # get_subsystem_names 00:28:05.110 21:41:25 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:05.110 21:41:25 -- host/discovery.sh@59 -- # sort 00:28:05.110 21:41:25 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:05.110 21:41:25 -- host/discovery.sh@59 -- # xargs 00:28:05.110 21:41:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:05.110 21:41:25 -- common/autotest_common.sh@10 -- # set +x 00:28:05.110 21:41:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:05.110 21:41:25 -- host/discovery.sh@118 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:05.110 21:41:25 -- host/discovery.sh@119 -- # get_bdev_list 00:28:05.110 21:41:25 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:05.110 21:41:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:05.110 21:41:25 -- common/autotest_common.sh@10 -- # set +x 00:28:05.110 21:41:25 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:05.110 21:41:25 -- host/discovery.sh@55 -- # sort 00:28:05.110 21:41:25 -- host/discovery.sh@55 -- # xargs 00:28:05.110 21:41:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:05.110 21:41:25 -- host/discovery.sh@119 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:28:05.110 21:41:25 -- host/discovery.sh@120 -- # get_subsystem_paths nvme0 00:28:05.110 21:41:25 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:28:05.110 21:41:25 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:28:05.110 21:41:25 -- common/autotest_common.sh@10 -- # set +x 00:28:05.110 21:41:25 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:28:05.110 21:41:25 -- host/discovery.sh@63 -- # sort -n 00:28:05.110 21:41:25 -- host/discovery.sh@63 -- # xargs 00:28:05.110 21:41:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:05.110 21:41:26 -- host/discovery.sh@120 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:28:05.110 21:41:26 -- host/discovery.sh@121 -- # get_notification_count 00:28:05.110 21:41:26 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:28:05.110 21:41:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:05.110 21:41:26 -- common/autotest_common.sh@10 -- # set +x 00:28:05.110 21:41:26 -- host/discovery.sh@74 -- # jq '. | length' 00:28:05.110 21:41:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:05.368 21:41:26 -- host/discovery.sh@74 -- # notification_count=0 00:28:05.368 21:41:26 -- host/discovery.sh@75 -- # notify_id=2 00:28:05.368 21:41:26 -- host/discovery.sh@122 -- # [[ 0 == 0 ]] 00:28:05.368 21:41:26 -- host/discovery.sh@126 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:05.368 21:41:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:05.368 21:41:26 -- common/autotest_common.sh@10 -- # set +x 00:28:05.368 [2024-07-11 21:41:26.080066] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:28:05.368 [2024-07-11 21:41:26.080108] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:05.368 21:41:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:05.368 21:41:26 -- host/discovery.sh@127 -- # sleep 1 00:28:05.368 [2024-07-11 21:41:26.086209] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:05.368 [2024-07-11 21:41:26.086270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:05.368 [2024-07-11 21:41:26.086295] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:05.368 [2024-07-11 21:41:26.086312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:05.368 [2024-07-11 21:41:26.086330] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:05.368 [2024-07-11 21:41:26.086347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:05.368 [2024-07-11 21:41:26.086364] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:05.368 [2024-07-11 21:41:26.086381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:05.368 [2024-07-11 21:41:26.086397] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd03080 is same with the state(5) to be set 00:28:05.368 [2024-07-11 21:41:26.086719] bdev_nvme.c:6546:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not 
found 00:28:05.368 [2024-07-11 21:41:26.086754] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:28:05.368 [2024-07-11 21:41:26.086820] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd03080 (9): Bad file descriptor 00:28:06.301 21:41:27 -- host/discovery.sh@128 -- # get_subsystem_names 00:28:06.301 21:41:27 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:06.301 21:41:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:06.301 21:41:27 -- common/autotest_common.sh@10 -- # set +x 00:28:06.301 21:41:27 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:06.301 21:41:27 -- host/discovery.sh@59 -- # sort 00:28:06.301 21:41:27 -- host/discovery.sh@59 -- # xargs 00:28:06.301 21:41:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:06.301 21:41:27 -- host/discovery.sh@128 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:06.301 21:41:27 -- host/discovery.sh@129 -- # get_bdev_list 00:28:06.301 21:41:27 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:06.301 21:41:27 -- host/discovery.sh@55 -- # sort 00:28:06.301 21:41:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:06.301 21:41:27 -- common/autotest_common.sh@10 -- # set +x 00:28:06.301 21:41:27 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:06.301 21:41:27 -- host/discovery.sh@55 -- # xargs 00:28:06.301 21:41:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:06.301 21:41:27 -- host/discovery.sh@129 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:28:06.301 21:41:27 -- host/discovery.sh@130 -- # get_subsystem_paths nvme0 00:28:06.301 21:41:27 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:28:06.301 21:41:27 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:28:06.301 21:41:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:06.301 21:41:27 -- host/discovery.sh@63 -- # sort -n 00:28:06.301 21:41:27 -- common/autotest_common.sh@10 -- # set +x 00:28:06.301 21:41:27 -- host/discovery.sh@63 -- # xargs 00:28:06.301 21:41:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:06.559 21:41:27 -- host/discovery.sh@130 -- # [[ 4421 == \4\4\2\1 ]] 00:28:06.559 21:41:27 -- host/discovery.sh@131 -- # get_notification_count 00:28:06.559 21:41:27 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:28:06.559 21:41:27 -- host/discovery.sh@74 -- # jq '. 
| length' 00:28:06.560 21:41:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:06.560 21:41:27 -- common/autotest_common.sh@10 -- # set +x 00:28:06.560 21:41:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:06.560 21:41:27 -- host/discovery.sh@74 -- # notification_count=0 00:28:06.560 21:41:27 -- host/discovery.sh@75 -- # notify_id=2 00:28:06.560 21:41:27 -- host/discovery.sh@132 -- # [[ 0 == 0 ]] 00:28:06.560 21:41:27 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:28:06.560 21:41:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:06.560 21:41:27 -- common/autotest_common.sh@10 -- # set +x 00:28:06.560 21:41:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:06.560 21:41:27 -- host/discovery.sh@135 -- # sleep 1 00:28:07.494 21:41:28 -- host/discovery.sh@136 -- # get_subsystem_names 00:28:07.494 21:41:28 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:07.494 21:41:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:07.494 21:41:28 -- common/autotest_common.sh@10 -- # set +x 00:28:07.494 21:41:28 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:07.494 21:41:28 -- host/discovery.sh@59 -- # sort 00:28:07.494 21:41:28 -- host/discovery.sh@59 -- # xargs 00:28:07.494 21:41:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:07.494 21:41:28 -- host/discovery.sh@136 -- # [[ '' == '' ]] 00:28:07.494 21:41:28 -- host/discovery.sh@137 -- # get_bdev_list 00:28:07.494 21:41:28 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:07.494 21:41:28 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:07.494 21:41:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:07.494 21:41:28 -- common/autotest_common.sh@10 -- # set +x 00:28:07.494 21:41:28 -- host/discovery.sh@55 -- # sort 00:28:07.494 21:41:28 -- host/discovery.sh@55 -- # xargs 00:28:07.494 21:41:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:07.776 21:41:28 -- host/discovery.sh@137 -- # [[ '' == '' ]] 00:28:07.776 21:41:28 -- host/discovery.sh@138 -- # get_notification_count 00:28:07.776 21:41:28 -- host/discovery.sh@74 -- # jq '. 
| length' 00:28:07.776 21:41:28 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:28:07.776 21:41:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:07.776 21:41:28 -- common/autotest_common.sh@10 -- # set +x 00:28:07.776 21:41:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:07.776 21:41:28 -- host/discovery.sh@74 -- # notification_count=2 00:28:07.776 21:41:28 -- host/discovery.sh@75 -- # notify_id=4 00:28:07.776 21:41:28 -- host/discovery.sh@139 -- # [[ 2 == 2 ]] 00:28:07.776 21:41:28 -- host/discovery.sh@142 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:07.776 21:41:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:07.776 21:41:28 -- common/autotest_common.sh@10 -- # set +x 00:28:08.709 [2024-07-11 21:41:29.519471] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:28:08.709 [2024-07-11 21:41:29.519519] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:28:08.709 [2024-07-11 21:41:29.519540] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:08.710 [2024-07-11 21:41:29.525513] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:28:08.710 [2024-07-11 21:41:29.585232] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:28:08.710 [2024-07-11 21:41:29.585494] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:28:08.710 21:41:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:08.710 21:41:29 -- host/discovery.sh@144 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:08.710 21:41:29 -- common/autotest_common.sh@640 -- # local es=0 00:28:08.710 21:41:29 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:08.710 21:41:29 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:28:08.710 21:41:29 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:08.710 21:41:29 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:28:08.710 21:41:29 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:08.710 21:41:29 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:08.710 21:41:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:08.710 21:41:29 -- common/autotest_common.sh@10 -- # set +x 00:28:08.710 request: 00:28:08.710 { 00:28:08.710 "name": "nvme", 00:28:08.710 "trtype": "tcp", 00:28:08.710 "traddr": "10.0.0.2", 00:28:08.710 "hostnqn": "nqn.2021-12.io.spdk:test", 00:28:08.710 "adrfam": "ipv4", 00:28:08.710 "trsvcid": "8009", 00:28:08.710 "wait_for_attach": true, 00:28:08.710 "method": "bdev_nvme_start_discovery", 00:28:08.710 "req_id": 1 00:28:08.710 } 00:28:08.710 Got JSON-RPC error response 00:28:08.710 response: 00:28:08.710 { 00:28:08.710 "code": -17, 00:28:08.710 "message": "File exists" 00:28:08.710 } 00:28:08.710 21:41:29 -- common/autotest_common.sh@579 -- # 
[[ 1 == 0 ]] 00:28:08.710 21:41:29 -- common/autotest_common.sh@643 -- # es=1 00:28:08.710 21:41:29 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:28:08.710 21:41:29 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:28:08.710 21:41:29 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:28:08.710 21:41:29 -- host/discovery.sh@146 -- # get_discovery_ctrlrs 00:28:08.710 21:41:29 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:28:08.710 21:41:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:08.710 21:41:29 -- host/discovery.sh@67 -- # sort 00:28:08.710 21:41:29 -- common/autotest_common.sh@10 -- # set +x 00:28:08.710 21:41:29 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:28:08.710 21:41:29 -- host/discovery.sh@67 -- # xargs 00:28:08.710 21:41:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:08.968 21:41:29 -- host/discovery.sh@146 -- # [[ nvme == \n\v\m\e ]] 00:28:08.968 21:41:29 -- host/discovery.sh@147 -- # get_bdev_list 00:28:08.968 21:41:29 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:08.968 21:41:29 -- host/discovery.sh@55 -- # sort 00:28:08.968 21:41:29 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:08.968 21:41:29 -- host/discovery.sh@55 -- # xargs 00:28:08.968 21:41:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:08.968 21:41:29 -- common/autotest_common.sh@10 -- # set +x 00:28:08.968 21:41:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:08.968 21:41:29 -- host/discovery.sh@147 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:28:08.968 21:41:29 -- host/discovery.sh@150 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:08.968 21:41:29 -- common/autotest_common.sh@640 -- # local es=0 00:28:08.968 21:41:29 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:08.968 21:41:29 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:28:08.968 21:41:29 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:08.968 21:41:29 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:28:08.968 21:41:29 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:08.968 21:41:29 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:08.968 21:41:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:08.968 21:41:29 -- common/autotest_common.sh@10 -- # set +x 00:28:08.968 request: 00:28:08.968 { 00:28:08.968 "name": "nvme_second", 00:28:08.968 "trtype": "tcp", 00:28:08.968 "traddr": "10.0.0.2", 00:28:08.968 "hostnqn": "nqn.2021-12.io.spdk:test", 00:28:08.968 "adrfam": "ipv4", 00:28:08.968 "trsvcid": "8009", 00:28:08.968 "wait_for_attach": true, 00:28:08.968 "method": "bdev_nvme_start_discovery", 00:28:08.968 "req_id": 1 00:28:08.968 } 00:28:08.968 Got JSON-RPC error response 00:28:08.968 response: 00:28:08.968 { 00:28:08.968 "code": -17, 00:28:08.968 "message": "File exists" 00:28:08.968 } 00:28:08.968 21:41:29 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:28:08.968 21:41:29 -- common/autotest_common.sh@643 -- # es=1 00:28:08.968 21:41:29 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:28:08.968 21:41:29 -- common/autotest_common.sh@662 -- 
# [[ -n '' ]] 00:28:08.968 21:41:29 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:28:08.968 21:41:29 -- host/discovery.sh@152 -- # get_discovery_ctrlrs 00:28:08.968 21:41:29 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:28:08.968 21:41:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:08.968 21:41:29 -- common/autotest_common.sh@10 -- # set +x 00:28:08.968 21:41:29 -- host/discovery.sh@67 -- # xargs 00:28:08.968 21:41:29 -- host/discovery.sh@67 -- # sort 00:28:08.968 21:41:29 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:28:08.968 21:41:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:08.968 21:41:29 -- host/discovery.sh@152 -- # [[ nvme == \n\v\m\e ]] 00:28:08.968 21:41:29 -- host/discovery.sh@153 -- # get_bdev_list 00:28:08.968 21:41:29 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:08.968 21:41:29 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:08.968 21:41:29 -- host/discovery.sh@55 -- # sort 00:28:08.968 21:41:29 -- host/discovery.sh@55 -- # xargs 00:28:08.968 21:41:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:08.968 21:41:29 -- common/autotest_common.sh@10 -- # set +x 00:28:08.968 21:41:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:08.968 21:41:29 -- host/discovery.sh@153 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:28:08.968 21:41:29 -- host/discovery.sh@156 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:28:08.968 21:41:29 -- common/autotest_common.sh@640 -- # local es=0 00:28:08.968 21:41:29 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:28:08.968 21:41:29 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:28:08.968 21:41:29 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:08.968 21:41:29 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:28:08.968 21:41:29 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:08.968 21:41:29 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:28:08.968 21:41:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:08.968 21:41:29 -- common/autotest_common.sh@10 -- # set +x 00:28:10.341 [2024-07-11 21:41:30.859060] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.341 [2024-07-11 21:41:30.859203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.341 [2024-07-11 21:41:30.859251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.341 [2024-07-11 21:41:30.859269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcffea0 with addr=10.0.0.2, port=8010 00:28:10.341 [2024-07-11 21:41:30.859294] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:28:10.341 [2024-07-11 21:41:30.859304] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:28:10.341 [2024-07-11 21:41:30.859315] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:28:11.275 [2024-07-11 21:41:31.859023] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.275 
[2024-07-11 21:41:31.859150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.275 [2024-07-11 21:41:31.859197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.275 [2024-07-11 21:41:31.859214] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedb1f0 with addr=10.0.0.2, port=8010 00:28:11.275 [2024-07-11 21:41:31.859238] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:28:11.275 [2024-07-11 21:41:31.859248] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:28:11.275 [2024-07-11 21:41:31.859258] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:28:12.209 [2024-07-11 21:41:32.858873] bdev_nvme.c:6802:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:28:12.209 request: 00:28:12.209 { 00:28:12.209 "name": "nvme_second", 00:28:12.209 "trtype": "tcp", 00:28:12.209 "traddr": "10.0.0.2", 00:28:12.209 "hostnqn": "nqn.2021-12.io.spdk:test", 00:28:12.209 "adrfam": "ipv4", 00:28:12.209 "trsvcid": "8010", 00:28:12.209 "attach_timeout_ms": 3000, 00:28:12.209 "method": "bdev_nvme_start_discovery", 00:28:12.209 "req_id": 1 00:28:12.209 } 00:28:12.209 Got JSON-RPC error response 00:28:12.209 response: 00:28:12.209 { 00:28:12.209 "code": -110, 00:28:12.209 "message": "Connection timed out" 00:28:12.209 } 00:28:12.209 21:41:32 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:28:12.209 21:41:32 -- common/autotest_common.sh@643 -- # es=1 00:28:12.209 21:41:32 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:28:12.209 21:41:32 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:28:12.209 21:41:32 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:28:12.209 21:41:32 -- host/discovery.sh@158 -- # get_discovery_ctrlrs 00:28:12.209 21:41:32 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:28:12.209 21:41:32 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:28:12.209 21:41:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:12.209 21:41:32 -- common/autotest_common.sh@10 -- # set +x 00:28:12.209 21:41:32 -- host/discovery.sh@67 -- # sort 00:28:12.209 21:41:32 -- host/discovery.sh@67 -- # xargs 00:28:12.209 21:41:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:12.209 21:41:32 -- host/discovery.sh@158 -- # [[ nvme == \n\v\m\e ]] 00:28:12.209 21:41:32 -- host/discovery.sh@160 -- # trap - SIGINT SIGTERM EXIT 00:28:12.209 21:41:32 -- host/discovery.sh@162 -- # kill 82726 00:28:12.209 21:41:32 -- host/discovery.sh@163 -- # nvmftestfini 00:28:12.209 21:41:32 -- nvmf/common.sh@476 -- # nvmfcleanup 00:28:12.209 21:41:32 -- nvmf/common.sh@116 -- # sync 00:28:12.209 21:41:32 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:28:12.209 21:41:32 -- nvmf/common.sh@119 -- # set +e 00:28:12.209 21:41:32 -- nvmf/common.sh@120 -- # for i in {1..20} 00:28:12.209 21:41:32 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:28:12.209 rmmod nvme_tcp 00:28:12.209 rmmod nvme_fabrics 00:28:12.209 rmmod nvme_keyring 00:28:12.209 21:41:33 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:28:12.209 21:41:33 -- nvmf/common.sh@123 -- # set -e 00:28:12.209 21:41:33 -- nvmf/common.sh@124 -- # return 0 00:28:12.209 21:41:33 -- nvmf/common.sh@477 -- # '[' -n 82694 ']' 00:28:12.209 21:41:33 -- nvmf/common.sh@478 -- # killprocess 82694 00:28:12.209 21:41:33 -- common/autotest_common.sh@926 -- # '[' -z 82694 ']' 
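The -110 failure above is the expected negative path: nothing listens on 10.0.0.2:8010, so the 3000 ms attach timeout expires. rpc_cmd simply forwards its arguments to scripts/rpc.py against the given -s socket, so a rough manual equivalent of the call being exercised (a sketch, assuming it is run from the SPDK repo root) would be:

    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
        -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 \
        -q nqn.2021-12.io.spdk:test -T 3000

The earlier -17 "File exists" case is the same call aimed at port 8009 with -w (wait_for_attach), rejected because it duplicates the discovery service already running there.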
00:28:12.209 21:41:33 -- common/autotest_common.sh@930 -- # kill -0 82694 00:28:12.209 21:41:33 -- common/autotest_common.sh@931 -- # uname 00:28:12.209 21:41:33 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:12.209 21:41:33 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 82694 00:28:12.209 killing process with pid 82694 00:28:12.209 21:41:33 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:28:12.209 21:41:33 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:28:12.209 21:41:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 82694' 00:28:12.209 21:41:33 -- common/autotest_common.sh@945 -- # kill 82694 00:28:12.209 21:41:33 -- common/autotest_common.sh@950 -- # wait 82694 00:28:12.467 21:41:33 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:28:12.467 21:41:33 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:28:12.467 21:41:33 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:28:12.467 21:41:33 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:12.467 21:41:33 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:28:12.467 21:41:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:12.467 21:41:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:12.467 21:41:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:12.467 21:41:33 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:28:12.467 00:28:12.467 real 0m14.124s 00:28:12.467 user 0m26.882s 00:28:12.467 sys 0m2.473s 00:28:12.467 21:41:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:12.467 21:41:33 -- common/autotest_common.sh@10 -- # set +x 00:28:12.467 ************************************ 00:28:12.467 END TEST nvmf_discovery 00:28:12.467 ************************************ 00:28:12.726 21:41:33 -- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:28:12.726 21:41:33 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:28:12.726 21:41:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:12.726 21:41:33 -- common/autotest_common.sh@10 -- # set +x 00:28:12.726 ************************************ 00:28:12.726 START TEST nvmf_discovery_remove_ifc 00:28:12.726 ************************************ 00:28:12.726 21:41:33 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:28:12.726 * Looking for test storage... 
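run_test wraps each host-level suite in timing and PASS/FAIL markers; the script it launches next can also be invoked directly against a TCP transport, exactly as the run_test line shows:

    /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp

The absolute path is specific to this CI workspace; in a local checkout the script lives under test/nvmf/host/.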
00:28:12.726 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:28:12.726 21:41:33 -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:12.726 21:41:33 -- nvmf/common.sh@7 -- # uname -s 00:28:12.726 21:41:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:12.726 21:41:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:12.726 21:41:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:12.726 21:41:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:12.726 21:41:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:12.726 21:41:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:12.726 21:41:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:12.726 21:41:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:12.726 21:41:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:12.726 21:41:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:12.726 21:41:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:65f0dc09-2f81-4c7b-a413-2a2a000e2750 00:28:12.726 21:41:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=65f0dc09-2f81-4c7b-a413-2a2a000e2750 00:28:12.726 21:41:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:12.726 21:41:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:12.726 21:41:33 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:28:12.726 21:41:33 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:12.726 21:41:33 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:12.726 21:41:33 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:12.726 21:41:33 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:12.726 21:41:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:12.726 21:41:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:12.726 21:41:33 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:12.726 21:41:33 -- 
paths/export.sh@5 -- # export PATH 00:28:12.726 21:41:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:12.726 21:41:33 -- nvmf/common.sh@46 -- # : 0 00:28:12.726 21:41:33 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:28:12.726 21:41:33 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:28:12.726 21:41:33 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:28:12.726 21:41:33 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:12.726 21:41:33 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:12.726 21:41:33 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:28:12.726 21:41:33 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:28:12.726 21:41:33 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:28:12.726 21:41:33 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:28:12.727 21:41:33 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:28:12.727 21:41:33 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:28:12.727 21:41:33 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:28:12.727 21:41:33 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:28:12.727 21:41:33 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:28:12.727 21:41:33 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:28:12.727 21:41:33 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:28:12.727 21:41:33 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:12.727 21:41:33 -- nvmf/common.sh@436 -- # prepare_net_devs 00:28:12.727 21:41:33 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:28:12.727 21:41:33 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:28:12.727 21:41:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:12.727 21:41:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:12.727 21:41:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:12.727 21:41:33 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:28:12.727 21:41:33 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:28:12.727 21:41:33 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:28:12.727 21:41:33 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:28:12.727 21:41:33 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:28:12.727 21:41:33 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:28:12.727 21:41:33 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:12.727 21:41:33 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:12.727 21:41:33 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:28:12.727 21:41:33 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:28:12.727 21:41:33 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:28:12.727 21:41:33 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:28:12.727 21:41:33 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:28:12.727 21:41:33 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
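nvmf_veth_init (whose full command trace follows) builds a small virtual topology from the variables just set: the target runs inside the nvmf_tgt_ns_spdk namespace and reaches the initiator over veth pairs joined by the nvmf_br bridge, with 10.0.0.1 on the initiator side and 10.0.0.2/10.0.0.3 on the target side. A condensed sketch of the key steps (link-up, the second target interface and the iptables rules are omitted here; the log below has the complete sequence):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br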
00:28:12.727 21:41:33 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:28:12.727 21:41:33 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:28:12.727 21:41:33 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:28:12.727 21:41:33 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:28:12.727 21:41:33 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:28:12.727 21:41:33 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:28:12.727 Cannot find device "nvmf_tgt_br" 00:28:12.727 21:41:33 -- nvmf/common.sh@154 -- # true 00:28:12.727 21:41:33 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:28:12.727 Cannot find device "nvmf_tgt_br2" 00:28:12.727 21:41:33 -- nvmf/common.sh@155 -- # true 00:28:12.727 21:41:33 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:28:12.727 21:41:33 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:28:12.727 Cannot find device "nvmf_tgt_br" 00:28:12.727 21:41:33 -- nvmf/common.sh@157 -- # true 00:28:12.727 21:41:33 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:28:12.727 Cannot find device "nvmf_tgt_br2" 00:28:12.727 21:41:33 -- nvmf/common.sh@158 -- # true 00:28:12.727 21:41:33 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:28:12.985 21:41:33 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:28:12.985 21:41:33 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:12.985 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:12.985 21:41:33 -- nvmf/common.sh@161 -- # true 00:28:12.985 21:41:33 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:12.985 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:12.985 21:41:33 -- nvmf/common.sh@162 -- # true 00:28:12.985 21:41:33 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:28:12.985 21:41:33 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:28:12.985 21:41:33 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:28:12.985 21:41:33 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:28:12.985 21:41:33 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:28:12.985 21:41:33 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:28:12.985 21:41:33 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:28:12.985 21:41:33 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:28:12.985 21:41:33 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:28:12.985 21:41:33 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:28:12.985 21:41:33 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:28:12.985 21:41:33 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:28:12.985 21:41:33 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:28:12.985 21:41:33 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:28:12.985 21:41:33 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:28:12.985 21:41:33 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:28:12.985 21:41:33 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:28:12.985 21:41:33 -- nvmf/common.sh@192 -- # ip 
link set nvmf_br up 00:28:12.985 21:41:33 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:28:12.985 21:41:33 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:28:12.985 21:41:33 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:28:12.985 21:41:33 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:28:12.985 21:41:33 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:28:12.985 21:41:33 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:28:12.985 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:12.985 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.084 ms 00:28:12.985 00:28:12.985 --- 10.0.0.2 ping statistics --- 00:28:12.985 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:12.985 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:28:12.985 21:41:33 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:28:12.985 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:28:12.985 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:28:12.985 00:28:12.985 --- 10.0.0.3 ping statistics --- 00:28:12.985 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:12.985 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:28:12.985 21:41:33 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:28:12.985 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:12.985 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:28:12.985 00:28:12.985 --- 10.0.0.1 ping statistics --- 00:28:12.985 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:12.985 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:28:12.985 21:41:33 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:12.985 21:41:33 -- nvmf/common.sh@421 -- # return 0 00:28:12.985 21:41:33 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:28:12.985 21:41:33 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:12.985 21:41:33 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:28:12.985 21:41:33 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:28:12.985 21:41:33 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:12.985 21:41:33 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:28:12.985 21:41:33 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:28:13.244 21:41:33 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:28:13.244 21:41:33 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:28:13.244 21:41:33 -- common/autotest_common.sh@712 -- # xtrace_disable 00:28:13.244 21:41:33 -- common/autotest_common.sh@10 -- # set +x 00:28:13.244 21:41:33 -- nvmf/common.sh@469 -- # nvmfpid=83221 00:28:13.244 21:41:33 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:28:13.244 21:41:33 -- nvmf/common.sh@470 -- # waitforlisten 83221 00:28:13.244 21:41:33 -- common/autotest_common.sh@819 -- # '[' -z 83221 ']' 00:28:13.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:13.244 21:41:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:13.244 21:41:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:13.244 21:41:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
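After the ping checks confirm the topology, nvmfappstart launches the target inside the namespace and waitforlisten polls its default RPC socket (/var/tmp/spdk.sock) until it answers. The launch traced above amounts to:

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2

with the resulting pid (83221 in this run) recorded so that nvmftestfini can kill it later.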
00:28:13.244 21:41:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:13.244 21:41:33 -- common/autotest_common.sh@10 -- # set +x 00:28:13.244 [2024-07-11 21:41:34.001793] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:28:13.244 [2024-07-11 21:41:34.001908] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:13.244 [2024-07-11 21:41:34.144844] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:13.502 [2024-07-11 21:41:34.244577] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:13.502 [2024-07-11 21:41:34.244751] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:13.502 [2024-07-11 21:41:34.244767] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:13.502 [2024-07-11 21:41:34.244778] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:13.502 [2024-07-11 21:41:34.244817] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:14.436 21:41:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:14.436 21:41:35 -- common/autotest_common.sh@852 -- # return 0 00:28:14.436 21:41:35 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:28:14.436 21:41:35 -- common/autotest_common.sh@718 -- # xtrace_disable 00:28:14.436 21:41:35 -- common/autotest_common.sh@10 -- # set +x 00:28:14.436 21:41:35 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:14.436 21:41:35 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:28:14.436 21:41:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:14.436 21:41:35 -- common/autotest_common.sh@10 -- # set +x 00:28:14.436 [2024-07-11 21:41:35.087032] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:14.436 [2024-07-11 21:41:35.095181] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:28:14.436 null0 00:28:14.436 [2024-07-11 21:41:35.127136] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:14.436 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:28:14.436 21:41:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:14.436 21:41:35 -- host/discovery_remove_ifc.sh@59 -- # hostpid=83253 00:28:14.436 21:41:35 -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:28:14.436 21:41:35 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 83253 /tmp/host.sock 00:28:14.436 21:41:35 -- common/autotest_common.sh@819 -- # '[' -z 83253 ']' 00:28:14.436 21:41:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/tmp/host.sock 00:28:14.436 21:41:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:14.436 21:41:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:28:14.436 21:41:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:14.436 21:41:35 -- common/autotest_common.sh@10 -- # set +x 00:28:14.436 [2024-07-11 21:41:35.201270] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
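The "host" in this test is simply a second nvmf_tgt instance started with its own RPC socket and bdev_nvme debug logging (-m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme, pid 83253 here). The host-side setup that follows in the log boils down to three RPCs against that socket; a sketch of the same steps with rpc.py, using the arguments visible in the trace:

    scripts/rpc.py -s /tmp/host.sock bdev_nvme_set_options -e 1
    scripts/rpc.py -s /tmp/host.sock framework_start_init
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 --wait-for-attach

The short loss/reconnect timeouts matter later: they are what make the nvme0n1 bdev disappear quickly once the target interface is pulled.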
00:28:14.436 [2024-07-11 21:41:35.201649] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83253 ] 00:28:14.436 [2024-07-11 21:41:35.343926] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:14.694 [2024-07-11 21:41:35.445309] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:14.694 [2024-07-11 21:41:35.445794] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:14.694 21:41:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:14.694 21:41:35 -- common/autotest_common.sh@852 -- # return 0 00:28:14.694 21:41:35 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:14.694 21:41:35 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:28:14.694 21:41:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:14.694 21:41:35 -- common/autotest_common.sh@10 -- # set +x 00:28:14.694 21:41:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:14.694 21:41:35 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:28:14.694 21:41:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:14.694 21:41:35 -- common/autotest_common.sh@10 -- # set +x 00:28:14.694 21:41:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:14.694 21:41:35 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:28:14.694 21:41:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:14.694 21:41:35 -- common/autotest_common.sh@10 -- # set +x 00:28:16.066 [2024-07-11 21:41:36.610185] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:28:16.066 [2024-07-11 21:41:36.610219] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:28:16.066 [2024-07-11 21:41:36.610240] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:16.066 [2024-07-11 21:41:36.616242] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:28:16.066 [2024-07-11 21:41:36.672624] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:28:16.066 [2024-07-11 21:41:36.672931] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:28:16.066 [2024-07-11 21:41:36.673006] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:28:16.066 [2024-07-11 21:41:36.673130] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:28:16.066 [2024-07-11 21:41:36.673217] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:28:16.066 21:41:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:16.066 21:41:36 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:28:16.066 21:41:36 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:16.066 21:41:36 -- 
host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:16.066 [2024-07-11 21:41:36.678988] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x15e8660 was disconnected and freed. delete nvme_qpair. 00:28:16.066 21:41:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:16.066 21:41:36 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:16.066 21:41:36 -- common/autotest_common.sh@10 -- # set +x 00:28:16.066 21:41:36 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:16.066 21:41:36 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:16.066 21:41:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:16.066 21:41:36 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:28:16.066 21:41:36 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:28:16.066 21:41:36 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:28:16.066 21:41:36 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:28:16.066 21:41:36 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:16.066 21:41:36 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:16.066 21:41:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:16.066 21:41:36 -- common/autotest_common.sh@10 -- # set +x 00:28:16.066 21:41:36 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:16.066 21:41:36 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:16.066 21:41:36 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:16.066 21:41:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:16.066 21:41:36 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:16.066 21:41:36 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:16.999 21:41:37 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:16.999 21:41:37 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:16.999 21:41:37 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:16.999 21:41:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:16.999 21:41:37 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:16.999 21:41:37 -- common/autotest_common.sh@10 -- # set +x 00:28:16.999 21:41:37 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:16.999 21:41:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:16.999 21:41:37 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:16.999 21:41:37 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:17.931 21:41:38 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:17.931 21:41:38 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:17.931 21:41:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:17.931 21:41:38 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:17.931 21:41:38 -- common/autotest_common.sh@10 -- # set +x 00:28:17.931 21:41:38 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:17.931 21:41:38 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:18.189 21:41:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:18.189 21:41:38 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:18.189 21:41:38 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:19.150 21:41:39 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:19.150 21:41:39 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 
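Once nvme0n1 is confirmed, the test removes the target's address and downs its interface inside the namespace (the two ip netns exec commands above), then wait_for_bdev '' polls until the bdev list is empty. Judging from the helpers traced here (get_bdev_list is bdev_get_bdevs piped through jq, sort and xargs, with a sleep 1 between retries), the loop behaves roughly like the following reconstruction, not the literal function body:

    while [[ "$(scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs \
            | jq -r '.[].name' | sort | xargs)" != "" ]]; do
        sleep 1
    done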
00:28:19.150 21:41:39 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:19.150 21:41:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:19.150 21:41:39 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:19.150 21:41:39 -- common/autotest_common.sh@10 -- # set +x 00:28:19.150 21:41:39 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:19.150 21:41:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:19.150 21:41:39 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:19.150 21:41:39 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:20.080 21:41:40 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:20.080 21:41:40 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:20.080 21:41:40 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:20.080 21:41:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:20.080 21:41:40 -- common/autotest_common.sh@10 -- # set +x 00:28:20.080 21:41:40 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:20.080 21:41:40 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:20.080 21:41:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:20.337 21:41:41 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:20.337 21:41:41 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:21.272 21:41:42 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:21.272 21:41:42 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:21.272 21:41:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:21.272 21:41:42 -- common/autotest_common.sh@10 -- # set +x 00:28:21.272 21:41:42 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:21.272 21:41:42 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:21.272 21:41:42 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:21.272 21:41:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:21.272 [2024-07-11 21:41:42.100054] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:28:21.272 [2024-07-11 21:41:42.100392] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:21.272 [2024-07-11 21:41:42.100604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.272 [2024-07-11 21:41:42.100625] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:21.272 [2024-07-11 21:41:42.100636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.272 [2024-07-11 21:41:42.100647] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:21.272 [2024-07-11 21:41:42.100656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.272 [2024-07-11 21:41:42.100667] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:21.272 [2024-07-11 21:41:42.100676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.272 [2024-07-11 
21:41:42.100688] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:28:21.272 [2024-07-11 21:41:42.100698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.272 [2024-07-11 21:41:42.100708] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15accf0 is same with the state(5) to be set 00:28:21.272 21:41:42 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:21.272 21:41:42 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:21.272 [2024-07-11 21:41:42.110043] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15accf0 (9): Bad file descriptor 00:28:21.272 [2024-07-11 21:41:42.120066] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:22.211 21:41:43 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:22.212 21:41:43 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:22.212 21:41:43 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:22.212 21:41:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:22.212 21:41:43 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:22.212 21:41:43 -- common/autotest_common.sh@10 -- # set +x 00:28:22.212 21:41:43 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:22.212 [2024-07-11 21:41:43.123554] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:28:23.584 [2024-07-11 21:41:44.147641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:28:24.579 [2024-07-11 21:41:45.171589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:28:24.579 [2024-07-11 21:41:45.172314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15accf0 with addr=10.0.0.2, port=4420 00:28:24.579 [2024-07-11 21:41:45.172357] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15accf0 is same with the state(5) to be set 00:28:24.579 [2024-07-11 21:41:45.172403] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:24.579 [2024-07-11 21:41:45.172420] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:24.579 [2024-07-11 21:41:45.172434] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:24.579 [2024-07-11 21:41:45.172449] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:28:24.579 [2024-07-11 21:41:45.173013] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15accf0 (9): Bad file descriptor 00:28:24.579 [2024-07-11 21:41:45.173060] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:24.579 [2024-07-11 21:41:45.173098] bdev_nvme.c:6510:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:28:24.579 [2024-07-11 21:41:45.173156] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:24.579 [2024-07-11 21:41:45.173179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:24.579 [2024-07-11 21:41:45.173198] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:24.579 [2024-07-11 21:41:45.173211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:24.579 [2024-07-11 21:41:45.173225] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:24.579 [2024-07-11 21:41:45.173238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:24.579 [2024-07-11 21:41:45.173252] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:24.579 [2024-07-11 21:41:45.173265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:24.579 [2024-07-11 21:41:45.173280] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:28:24.579 [2024-07-11 21:41:45.173293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:24.579 [2024-07-11 21:41:45.173307] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
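With the reconnect attempts exhausted, both the data controller and the discovery entry for 10.0.0.2:4420 are torn down and the bdev list is expected to go empty. The test then restores connectivity and waits for a fresh attach (which will surface as nvme1/nvme1n1), using the two commands that appear shortly below:

    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up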
00:28:24.579 [2024-07-11 21:41:45.173397] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ad100 (9): Bad file descriptor 00:28:24.579 [2024-07-11 21:41:45.174430] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:28:24.579 [2024-07-11 21:41:45.174453] nvme_ctrlr.c:1136:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:28:24.579 21:41:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:24.579 21:41:45 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:24.579 21:41:45 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:25.514 21:41:46 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:25.514 21:41:46 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:25.514 21:41:46 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:25.514 21:41:46 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:25.514 21:41:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:25.514 21:41:46 -- common/autotest_common.sh@10 -- # set +x 00:28:25.514 21:41:46 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:25.514 21:41:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:25.514 21:41:46 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:28:25.514 21:41:46 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:28:25.514 21:41:46 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:28:25.514 21:41:46 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:28:25.514 21:41:46 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:25.515 21:41:46 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:25.515 21:41:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:25.515 21:41:46 -- common/autotest_common.sh@10 -- # set +x 00:28:25.515 21:41:46 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:25.515 21:41:46 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:25.515 21:41:46 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:25.515 21:41:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:25.515 21:41:46 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:28:25.515 21:41:46 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:26.449 [2024-07-11 21:41:47.185723] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:28:26.449 [2024-07-11 21:41:47.185770] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:28:26.449 [2024-07-11 21:41:47.185797] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:26.449 [2024-07-11 21:41:47.191762] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:28:26.449 [2024-07-11 21:41:47.247292] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:28:26.449 [2024-07-11 21:41:47.247362] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:28:26.449 [2024-07-11 21:41:47.247387] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:28:26.449 [2024-07-11 21:41:47.247405] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] 
attach nvme1 done 00:28:26.449 [2024-07-11 21:41:47.247416] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:28:26.449 [2024-07-11 21:41:47.254409] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x159c9a0 was disconnected and freed. delete nvme_qpair. 00:28:26.449 21:41:47 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:26.449 21:41:47 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:26.449 21:41:47 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:26.449 21:41:47 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:26.449 21:41:47 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:26.449 21:41:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:26.449 21:41:47 -- common/autotest_common.sh@10 -- # set +x 00:28:26.449 21:41:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:26.449 21:41:47 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:28:26.449 21:41:47 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:28:26.449 21:41:47 -- host/discovery_remove_ifc.sh@90 -- # killprocess 83253 00:28:26.449 21:41:47 -- common/autotest_common.sh@926 -- # '[' -z 83253 ']' 00:28:26.449 21:41:47 -- common/autotest_common.sh@930 -- # kill -0 83253 00:28:26.449 21:41:47 -- common/autotest_common.sh@931 -- # uname 00:28:26.449 21:41:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:26.449 21:41:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 83253 00:28:26.707 killing process with pid 83253 00:28:26.707 21:41:47 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:28:26.707 21:41:47 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:28:26.707 21:41:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 83253' 00:28:26.707 21:41:47 -- common/autotest_common.sh@945 -- # kill 83253 00:28:26.707 21:41:47 -- common/autotest_common.sh@950 -- # wait 83253 00:28:26.707 21:41:47 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:28:26.707 21:41:47 -- nvmf/common.sh@476 -- # nvmfcleanup 00:28:26.707 21:41:47 -- nvmf/common.sh@116 -- # sync 00:28:26.965 21:41:47 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:28:26.965 21:41:47 -- nvmf/common.sh@119 -- # set +e 00:28:26.965 21:41:47 -- nvmf/common.sh@120 -- # for i in {1..20} 00:28:26.965 21:41:47 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:28:26.965 rmmod nvme_tcp 00:28:26.965 rmmod nvme_fabrics 00:28:26.965 rmmod nvme_keyring 00:28:26.965 21:41:47 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:28:26.965 21:41:47 -- nvmf/common.sh@123 -- # set -e 00:28:26.965 21:41:47 -- nvmf/common.sh@124 -- # return 0 00:28:26.965 21:41:47 -- nvmf/common.sh@477 -- # '[' -n 83221 ']' 00:28:26.965 21:41:47 -- nvmf/common.sh@478 -- # killprocess 83221 00:28:26.965 21:41:47 -- common/autotest_common.sh@926 -- # '[' -z 83221 ']' 00:28:26.965 21:41:47 -- common/autotest_common.sh@930 -- # kill -0 83221 00:28:26.965 21:41:47 -- common/autotest_common.sh@931 -- # uname 00:28:26.965 21:41:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:26.965 21:41:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 83221 00:28:26.965 killing process with pid 83221 00:28:26.965 21:41:47 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:28:26.965 21:41:47 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 
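The re-attached path comes back under a new controller name (nvme1) and therefore a new bdev name (nvme1n1), which is what wait_for_bdev nvme1n1 checks for. Outside the test, the same check can be made by hand against the host socket, reusing the pipeline the helpers already apply:

    scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name'

After that the trap is cleared and both processes (83253, then 83221) are killed before the modules and namespace are cleaned up.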
00:28:26.965 21:41:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 83221' 00:28:26.965 21:41:47 -- common/autotest_common.sh@945 -- # kill 83221 00:28:26.965 21:41:47 -- common/autotest_common.sh@950 -- # wait 83221 00:28:27.223 21:41:47 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:28:27.223 21:41:47 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:28:27.223 21:41:47 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:28:27.223 21:41:47 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:27.223 21:41:47 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:28:27.223 21:41:47 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:27.223 21:41:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:27.223 21:41:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:27.223 21:41:48 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:28:27.223 00:28:27.223 real 0m14.574s 00:28:27.223 user 0m22.773s 00:28:27.223 sys 0m2.683s 00:28:27.223 21:41:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:27.223 21:41:48 -- common/autotest_common.sh@10 -- # set +x 00:28:27.223 ************************************ 00:28:27.223 END TEST nvmf_discovery_remove_ifc 00:28:27.223 ************************************ 00:28:27.223 21:41:48 -- nvmf/nvmf.sh@106 -- # [[ tcp == \t\c\p ]] 00:28:27.223 21:41:48 -- nvmf/nvmf.sh@107 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:27.223 21:41:48 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:28:27.223 21:41:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:27.223 21:41:48 -- common/autotest_common.sh@10 -- # set +x 00:28:27.223 ************************************ 00:28:27.223 START TEST nvmf_digest 00:28:27.223 ************************************ 00:28:27.223 21:41:48 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:27.223 * Looking for test storage... 
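The next suite, nvmf_digest, is launched the same way; per the run_test line above it can also be run directly (again, the absolute path is this workspace's checkout):

    /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp

Its setup mirrors the previous test: source nvmf/common.sh, rebuild the veth/namespace topology, and drive the host side over a bperf socket (/var/tmp/bperf.sock, with a 2-second runtime per case, as the variables traced below show).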
00:28:27.223 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:28:27.223 21:41:48 -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:27.223 21:41:48 -- nvmf/common.sh@7 -- # uname -s 00:28:27.223 21:41:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:27.223 21:41:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:27.223 21:41:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:27.223 21:41:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:27.223 21:41:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:27.223 21:41:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:27.223 21:41:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:27.223 21:41:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:27.223 21:41:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:27.223 21:41:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:27.482 21:41:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:65f0dc09-2f81-4c7b-a413-2a2a000e2750 00:28:27.482 21:41:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=65f0dc09-2f81-4c7b-a413-2a2a000e2750 00:28:27.482 21:41:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:27.482 21:41:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:27.482 21:41:48 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:28:27.482 21:41:48 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:27.482 21:41:48 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:27.482 21:41:48 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:27.482 21:41:48 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:27.482 21:41:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:27.482 21:41:48 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:27.482 21:41:48 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:27.482 21:41:48 -- paths/export.sh@5 
-- # export PATH 00:28:27.482 21:41:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:27.482 21:41:48 -- nvmf/common.sh@46 -- # : 0 00:28:27.482 21:41:48 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:28:27.482 21:41:48 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:28:27.482 21:41:48 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:28:27.482 21:41:48 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:27.482 21:41:48 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:27.482 21:41:48 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:28:27.482 21:41:48 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:28:27.482 21:41:48 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:28:27.482 21:41:48 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:28:27.482 21:41:48 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:28:27.482 21:41:48 -- host/digest.sh@16 -- # runtime=2 00:28:27.482 21:41:48 -- host/digest.sh@130 -- # [[ tcp != \t\c\p ]] 00:28:27.482 21:41:48 -- host/digest.sh@132 -- # nvmftestinit 00:28:27.482 21:41:48 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:28:27.482 21:41:48 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:27.482 21:41:48 -- nvmf/common.sh@436 -- # prepare_net_devs 00:28:27.482 21:41:48 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:28:27.482 21:41:48 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:28:27.482 21:41:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:27.482 21:41:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:27.482 21:41:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:27.482 21:41:48 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:28:27.482 21:41:48 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:28:27.482 21:41:48 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:28:27.482 21:41:48 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:28:27.482 21:41:48 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:28:27.482 21:41:48 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:28:27.482 21:41:48 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:27.482 21:41:48 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:27.482 21:41:48 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:28:27.482 21:41:48 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:28:27.482 21:41:48 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:28:27.482 21:41:48 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:28:27.482 21:41:48 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:28:27.482 21:41:48 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:27.482 21:41:48 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:28:27.482 21:41:48 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:28:27.482 21:41:48 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:28:27.482 21:41:48 -- nvmf/common.sh@151 -- # 
NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:28:27.482 21:41:48 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:28:27.482 21:41:48 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:28:27.482 Cannot find device "nvmf_tgt_br" 00:28:27.482 21:41:48 -- nvmf/common.sh@154 -- # true 00:28:27.482 21:41:48 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:28:27.482 Cannot find device "nvmf_tgt_br2" 00:28:27.482 21:41:48 -- nvmf/common.sh@155 -- # true 00:28:27.482 21:41:48 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:28:27.482 21:41:48 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:28:27.482 Cannot find device "nvmf_tgt_br" 00:28:27.483 21:41:48 -- nvmf/common.sh@157 -- # true 00:28:27.483 21:41:48 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:28:27.483 Cannot find device "nvmf_tgt_br2" 00:28:27.483 21:41:48 -- nvmf/common.sh@158 -- # true 00:28:27.483 21:41:48 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:28:27.483 21:41:48 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:28:27.483 21:41:48 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:27.483 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:27.483 21:41:48 -- nvmf/common.sh@161 -- # true 00:28:27.483 21:41:48 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:27.483 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:27.483 21:41:48 -- nvmf/common.sh@162 -- # true 00:28:27.483 21:41:48 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:28:27.483 21:41:48 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:28:27.483 21:41:48 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:28:27.483 21:41:48 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:28:27.483 21:41:48 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:28:27.483 21:41:48 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:28:27.483 21:41:48 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:28:27.483 21:41:48 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:28:27.483 21:41:48 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:28:27.483 21:41:48 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:28:27.483 21:41:48 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:28:27.483 21:41:48 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:28:27.483 21:41:48 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:28:27.483 21:41:48 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:28:27.483 21:41:48 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:28:27.483 21:41:48 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:28:27.742 21:41:48 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:28:27.742 21:41:48 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:28:27.742 21:41:48 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:28:27.742 21:41:48 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:28:27.742 21:41:48 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:28:27.742 
21:41:48 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:28:27.742 21:41:48 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:28:27.742 21:41:48 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:28:27.742 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:27.742 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.116 ms 00:28:27.742 00:28:27.742 --- 10.0.0.2 ping statistics --- 00:28:27.742 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:27.742 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:28:27.742 21:41:48 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:28:27.742 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:28:27.742 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:28:27.742 00:28:27.742 --- 10.0.0.3 ping statistics --- 00:28:27.742 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:27.742 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:28:27.742 21:41:48 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:28:27.742 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:27.742 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.046 ms 00:28:27.742 00:28:27.742 --- 10.0.0.1 ping statistics --- 00:28:27.742 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:27.742 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:28:27.742 21:41:48 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:27.742 21:41:48 -- nvmf/common.sh@421 -- # return 0 00:28:27.742 21:41:48 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:28:27.742 21:41:48 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:27.742 21:41:48 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:28:27.742 21:41:48 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:28:27.742 21:41:48 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:27.742 21:41:48 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:28:27.742 21:41:48 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:28:27.742 21:41:48 -- host/digest.sh@134 -- # trap cleanup SIGINT SIGTERM EXIT 00:28:27.742 21:41:48 -- host/digest.sh@135 -- # run_test nvmf_digest_clean run_digest 00:28:27.742 21:41:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:27.742 21:41:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:27.742 21:41:48 -- common/autotest_common.sh@10 -- # set +x 00:28:27.742 ************************************ 00:28:27.742 START TEST nvmf_digest_clean 00:28:27.742 ************************************ 00:28:27.742 21:41:48 -- common/autotest_common.sh@1104 -- # run_digest 00:28:27.742 21:41:48 -- host/digest.sh@119 -- # nvmfappstart --wait-for-rpc 00:28:27.742 21:41:48 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:28:27.742 21:41:48 -- common/autotest_common.sh@712 -- # xtrace_disable 00:28:27.742 21:41:48 -- common/autotest_common.sh@10 -- # set +x 00:28:27.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
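Before the target comes up, the nvmf_veth_init trace above builds an isolated test network: a dedicated namespace for the target, veth pairs whose host-side ends are enslaved to a bridge, addresses in 10.0.0.0/24, an iptables rule opening TCP port 4420 on the initiator interface, and ping checks in both directions. A condensed sketch of that topology, using only the names and addresses visible in the log (the real helper in test/nvmf/common.sh also tears down stale devices first, and creates a second target interface, nvmf_tgt_if2 at 10.0.0.3, the same way):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator-side pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target-side pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                             # initiator -> target
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1              # target -> initiator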
00:28:27.742 21:41:48 -- nvmf/common.sh@469 -- # nvmfpid=83662 00:28:27.742 21:41:48 -- nvmf/common.sh@470 -- # waitforlisten 83662 00:28:27.742 21:41:48 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:28:27.742 21:41:48 -- common/autotest_common.sh@819 -- # '[' -z 83662 ']' 00:28:27.742 21:41:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:27.742 21:41:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:27.742 21:41:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:27.742 21:41:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:27.742 21:41:48 -- common/autotest_common.sh@10 -- # set +x 00:28:27.742 [2024-07-11 21:41:48.589396] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:28:27.742 [2024-07-11 21:41:48.589994] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:28.001 [2024-07-11 21:41:48.730726] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:28.001 [2024-07-11 21:41:48.835058] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:28.001 [2024-07-11 21:41:48.835243] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:28.001 [2024-07-11 21:41:48.835260] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:28.001 [2024-07-11 21:41:48.835272] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:28.001 [2024-07-11 21:41:48.835305] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:28.936 21:41:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:28.936 21:41:49 -- common/autotest_common.sh@852 -- # return 0 00:28:28.936 21:41:49 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:28:28.936 21:41:49 -- common/autotest_common.sh@718 -- # xtrace_disable 00:28:28.936 21:41:49 -- common/autotest_common.sh@10 -- # set +x 00:28:28.936 21:41:49 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:28.936 21:41:49 -- host/digest.sh@120 -- # common_target_config 00:28:28.936 21:41:49 -- host/digest.sh@43 -- # rpc_cmd 00:28:28.936 21:41:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:28.936 21:41:49 -- common/autotest_common.sh@10 -- # set +x 00:28:28.936 null0 00:28:28.936 [2024-07-11 21:41:49.698599] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:28.936 [2024-07-11 21:41:49.722709] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:28.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:28:28.936 21:41:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:28.936 21:41:49 -- host/digest.sh@122 -- # run_bperf randread 4096 128 00:28:28.936 21:41:49 -- host/digest.sh@77 -- # local rw bs qd 00:28:28.936 21:41:49 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:28.936 21:41:49 -- host/digest.sh@80 -- # rw=randread 00:28:28.936 21:41:49 -- host/digest.sh@80 -- # bs=4096 00:28:28.936 21:41:49 -- host/digest.sh@80 -- # qd=128 00:28:28.936 21:41:49 -- host/digest.sh@82 -- # bperfpid=83694 00:28:28.936 21:41:49 -- host/digest.sh@83 -- # waitforlisten 83694 /var/tmp/bperf.sock 00:28:28.936 21:41:49 -- common/autotest_common.sh@819 -- # '[' -z 83694 ']' 00:28:28.936 21:41:49 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:28.936 21:41:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:28.936 21:41:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:28.936 21:41:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:28.936 21:41:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:28.936 21:41:49 -- common/autotest_common.sh@10 -- # set +x 00:28:28.936 [2024-07-11 21:41:49.780510] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:28:28.936 [2024-07-11 21:41:49.780877] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83694 ] 00:28:29.198 [2024-07-11 21:41:49.925532] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:29.198 [2024-07-11 21:41:50.027670] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:30.132 21:41:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:30.132 21:41:50 -- common/autotest_common.sh@852 -- # return 0 00:28:30.132 21:41:50 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:28:30.132 21:41:50 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:28:30.132 21:41:50 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:30.390 21:41:51 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:30.390 21:41:51 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:30.648 nvme0n1 00:28:30.648 21:41:51 -- host/digest.sh@91 -- # bperf_py perform_tests 00:28:30.648 21:41:51 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:30.648 Running I/O for 2 seconds... 
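The run_bperf helper shown above never touches the bdevperf command line after launch; everything else happens over the app's RPC socket. Roughly, and assuming the same socket path, target address, and NQN that appear in the trace (paths relative to the spdk repo):

# start bdevperf idle, waiting for RPCs
build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
# attach the remote controller with TCP data digest enabled
scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# kick off the 2-second job defined on the command line above
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests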
00:28:33.175 00:28:33.175 Latency(us) 00:28:33.175 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:33.175 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:28:33.175 nvme0n1 : 2.01 14997.28 58.58 0.00 0.00 8528.75 7804.74 21328.99 00:28:33.175 =================================================================================================================== 00:28:33.175 Total : 14997.28 58.58 0.00 0.00 8528.75 7804.74 21328.99 00:28:33.175 0 00:28:33.175 21:41:53 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:28:33.175 21:41:53 -- host/digest.sh@92 -- # get_accel_stats 00:28:33.175 21:41:53 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:33.175 21:41:53 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:33.175 21:41:53 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:33.175 | select(.opcode=="crc32c") 00:28:33.175 | "\(.module_name) \(.executed)"' 00:28:33.175 21:41:53 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:28:33.175 21:41:53 -- host/digest.sh@93 -- # exp_module=software 00:28:33.175 21:41:53 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:28:33.175 21:41:53 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:33.175 21:41:53 -- host/digest.sh@97 -- # killprocess 83694 00:28:33.175 21:41:53 -- common/autotest_common.sh@926 -- # '[' -z 83694 ']' 00:28:33.175 21:41:53 -- common/autotest_common.sh@930 -- # kill -0 83694 00:28:33.175 21:41:53 -- common/autotest_common.sh@931 -- # uname 00:28:33.175 21:41:53 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:33.175 21:41:53 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 83694 00:28:33.175 killing process with pid 83694 00:28:33.175 Received shutdown signal, test time was about 2.000000 seconds 00:28:33.175 00:28:33.175 Latency(us) 00:28:33.175 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:33.175 =================================================================================================================== 00:28:33.175 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:33.175 21:41:53 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:28:33.175 21:41:53 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:28:33.175 21:41:53 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 83694' 00:28:33.175 21:41:53 -- common/autotest_common.sh@945 -- # kill 83694 00:28:33.175 21:41:53 -- common/autotest_common.sh@950 -- # wait 83694 00:28:33.175 21:41:54 -- host/digest.sh@123 -- # run_bperf randread 131072 16 00:28:33.175 21:41:54 -- host/digest.sh@77 -- # local rw bs qd 00:28:33.175 21:41:54 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:33.175 21:41:54 -- host/digest.sh@80 -- # rw=randread 00:28:33.175 21:41:54 -- host/digest.sh@80 -- # bs=131072 00:28:33.175 21:41:54 -- host/digest.sh@80 -- # qd=16 00:28:33.175 21:41:54 -- host/digest.sh@82 -- # bperfpid=83762 00:28:33.175 21:41:54 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:33.175 21:41:54 -- host/digest.sh@83 -- # waitforlisten 83762 /var/tmp/bperf.sock 00:28:33.175 21:41:54 -- common/autotest_common.sh@819 -- # '[' -z 83762 ']' 00:28:33.175 21:41:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:33.175 21:41:54 -- common/autotest_common.sh@824 -- # 
local max_retries=100 00:28:33.175 21:41:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:33.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:33.175 21:41:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:33.175 21:41:54 -- common/autotest_common.sh@10 -- # set +x 00:28:33.433 [2024-07-11 21:41:54.164864] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:28:33.433 [2024-07-11 21:41:54.165223] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6I/O size of 131072 is greater than zero copy threshold (65536). 00:28:33.433 Zero copy mechanism will not be used. 00:28:33.433 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83762 ] 00:28:33.433 [2024-07-11 21:41:54.307140] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:33.694 [2024-07-11 21:41:54.404814] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:34.266 21:41:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:34.266 21:41:55 -- common/autotest_common.sh@852 -- # return 0 00:28:34.266 21:41:55 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:28:34.266 21:41:55 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:28:34.266 21:41:55 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:34.524 21:41:55 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:34.524 21:41:55 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:34.782 nvme0n1 00:28:34.782 21:41:55 -- host/digest.sh@91 -- # bperf_py perform_tests 00:28:34.782 21:41:55 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:35.040 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:35.040 Zero copy mechanism will not be used. 00:28:35.040 Running I/O for 2 seconds... 
00:28:36.942 00:28:36.942 Latency(us) 00:28:36.942 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:36.942 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:28:36.942 nvme0n1 : 2.00 7567.12 945.89 0.00 0.00 2111.34 1936.29 10783.65 00:28:36.942 =================================================================================================================== 00:28:36.942 Total : 7567.12 945.89 0.00 0.00 2111.34 1936.29 10783.65 00:28:36.942 0 00:28:36.942 21:41:57 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:28:36.942 21:41:57 -- host/digest.sh@92 -- # get_accel_stats 00:28:36.942 21:41:57 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:36.942 21:41:57 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:36.942 | select(.opcode=="crc32c") 00:28:36.942 | "\(.module_name) \(.executed)"' 00:28:36.942 21:41:57 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:37.208 21:41:58 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:28:37.208 21:41:58 -- host/digest.sh@93 -- # exp_module=software 00:28:37.208 21:41:58 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:28:37.208 21:41:58 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:37.208 21:41:58 -- host/digest.sh@97 -- # killprocess 83762 00:28:37.208 21:41:58 -- common/autotest_common.sh@926 -- # '[' -z 83762 ']' 00:28:37.208 21:41:58 -- common/autotest_common.sh@930 -- # kill -0 83762 00:28:37.208 21:41:58 -- common/autotest_common.sh@931 -- # uname 00:28:37.208 21:41:58 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:37.208 21:41:58 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 83762 00:28:37.208 killing process with pid 83762 00:28:37.208 Received shutdown signal, test time was about 2.000000 seconds 00:28:37.208 00:28:37.208 Latency(us) 00:28:37.208 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:37.208 =================================================================================================================== 00:28:37.208 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:37.208 21:41:58 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:28:37.208 21:41:58 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:28:37.208 21:41:58 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 83762' 00:28:37.208 21:41:58 -- common/autotest_common.sh@945 -- # kill 83762 00:28:37.208 21:41:58 -- common/autotest_common.sh@950 -- # wait 83762 00:28:37.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
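After each workload the script asks the accel framework which module actually computed the crc32c digests, parsing accel_get_stats with the jq filter shown in the trace; on this host no offload is configured, so the expected module is software and the executed count only has to be non-zero. An equivalent one-liner, with the output shape shown purely as an illustration:

scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
    | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
# prints e.g. "software <count>", which digest.sh reads into acc_module / acc_executed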
00:28:37.466 21:41:58 -- host/digest.sh@124 -- # run_bperf randwrite 4096 128 00:28:37.466 21:41:58 -- host/digest.sh@77 -- # local rw bs qd 00:28:37.466 21:41:58 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:37.466 21:41:58 -- host/digest.sh@80 -- # rw=randwrite 00:28:37.466 21:41:58 -- host/digest.sh@80 -- # bs=4096 00:28:37.466 21:41:58 -- host/digest.sh@80 -- # qd=128 00:28:37.466 21:41:58 -- host/digest.sh@82 -- # bperfpid=83817 00:28:37.466 21:41:58 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:37.466 21:41:58 -- host/digest.sh@83 -- # waitforlisten 83817 /var/tmp/bperf.sock 00:28:37.466 21:41:58 -- common/autotest_common.sh@819 -- # '[' -z 83817 ']' 00:28:37.466 21:41:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:37.466 21:41:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:37.466 21:41:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:37.466 21:41:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:37.466 21:41:58 -- common/autotest_common.sh@10 -- # set +x 00:28:37.466 [2024-07-11 21:41:58.406774] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:28:37.466 [2024-07-11 21:41:58.407240] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83817 ] 00:28:37.724 [2024-07-11 21:41:58.551939] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:37.724 [2024-07-11 21:41:58.648928] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:38.657 21:41:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:38.657 21:41:59 -- common/autotest_common.sh@852 -- # return 0 00:28:38.657 21:41:59 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:28:38.657 21:41:59 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:28:38.657 21:41:59 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:38.915 21:41:59 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:38.915 21:41:59 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:39.173 nvme0n1 00:28:39.173 21:41:59 -- host/digest.sh@91 -- # bperf_py perform_tests 00:28:39.173 21:41:59 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:39.173 Running I/O for 2 seconds... 
00:28:41.700 00:28:41.700 Latency(us) 00:28:41.700 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:41.700 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:41.700 nvme0n1 : 2.01 15997.11 62.49 0.00 0.00 7993.71 7357.91 15847.80 00:28:41.700 =================================================================================================================== 00:28:41.700 Total : 15997.11 62.49 0.00 0.00 7993.71 7357.91 15847.80 00:28:41.700 0 00:28:41.700 21:42:02 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:28:41.700 21:42:02 -- host/digest.sh@92 -- # get_accel_stats 00:28:41.700 21:42:02 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:41.700 21:42:02 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:41.700 21:42:02 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:41.700 | select(.opcode=="crc32c") 00:28:41.700 | "\(.module_name) \(.executed)"' 00:28:41.700 21:42:02 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:28:41.700 21:42:02 -- host/digest.sh@93 -- # exp_module=software 00:28:41.700 21:42:02 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:28:41.700 21:42:02 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:41.700 21:42:02 -- host/digest.sh@97 -- # killprocess 83817 00:28:41.700 21:42:02 -- common/autotest_common.sh@926 -- # '[' -z 83817 ']' 00:28:41.700 21:42:02 -- common/autotest_common.sh@930 -- # kill -0 83817 00:28:41.700 21:42:02 -- common/autotest_common.sh@931 -- # uname 00:28:41.700 21:42:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:41.700 21:42:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 83817 00:28:41.700 killing process with pid 83817 00:28:41.700 Received shutdown signal, test time was about 2.000000 seconds 00:28:41.700 00:28:41.700 Latency(us) 00:28:41.700 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:41.700 =================================================================================================================== 00:28:41.700 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:41.700 21:42:02 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:28:41.700 21:42:02 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:28:41.700 21:42:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 83817' 00:28:41.700 21:42:02 -- common/autotest_common.sh@945 -- # kill 83817 00:28:41.700 21:42:02 -- common/autotest_common.sh@950 -- # wait 83817 00:28:41.700 21:42:02 -- host/digest.sh@125 -- # run_bperf randwrite 131072 16 00:28:41.700 21:42:02 -- host/digest.sh@77 -- # local rw bs qd 00:28:41.700 21:42:02 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:41.700 21:42:02 -- host/digest.sh@80 -- # rw=randwrite 00:28:41.700 21:42:02 -- host/digest.sh@80 -- # bs=131072 00:28:41.700 21:42:02 -- host/digest.sh@80 -- # qd=16 00:28:41.700 21:42:02 -- host/digest.sh@82 -- # bperfpid=83877 00:28:41.700 21:42:02 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:41.700 21:42:02 -- host/digest.sh@83 -- # waitforlisten 83877 /var/tmp/bperf.sock 00:28:41.700 21:42:02 -- common/autotest_common.sh@819 -- # '[' -z 83877 ']' 00:28:41.700 21:42:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:41.700 21:42:02 -- common/autotest_common.sh@824 -- # 
local max_retries=100 00:28:41.700 21:42:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:41.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:41.700 21:42:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:41.700 21:42:02 -- common/autotest_common.sh@10 -- # set +x 00:28:41.700 [2024-07-11 21:42:02.603804] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:28:41.700 [2024-07-11 21:42:02.604163] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83877 ] 00:28:41.700 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:41.700 Zero copy mechanism will not be used. 00:28:41.957 [2024-07-11 21:42:02.741249] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:41.957 [2024-07-11 21:42:02.836101] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:41.957 21:42:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:41.957 21:42:02 -- common/autotest_common.sh@852 -- # return 0 00:28:41.957 21:42:02 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:28:41.957 21:42:02 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:28:41.957 21:42:02 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:42.530 21:42:03 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:42.530 21:42:03 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:42.530 nvme0n1 00:28:42.530 21:42:03 -- host/digest.sh@91 -- # bperf_py perform_tests 00:28:42.530 21:42:03 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:42.786 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:42.786 Zero copy mechanism will not be used. 00:28:42.786 Running I/O for 2 seconds... 
00:28:44.687 00:28:44.687 Latency(us) 00:28:44.687 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:44.687 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:28:44.687 nvme0n1 : 2.00 6446.01 805.75 0.00 0.00 2476.74 2055.45 5302.46 00:28:44.687 =================================================================================================================== 00:28:44.687 Total : 6446.01 805.75 0.00 0.00 2476.74 2055.45 5302.46 00:28:44.687 0 00:28:44.687 21:42:05 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:28:44.687 21:42:05 -- host/digest.sh@92 -- # get_accel_stats 00:28:44.687 21:42:05 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:44.687 21:42:05 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:44.687 21:42:05 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:44.687 | select(.opcode=="crc32c") 00:28:44.687 | "\(.module_name) \(.executed)"' 00:28:44.945 21:42:05 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:28:44.945 21:42:05 -- host/digest.sh@93 -- # exp_module=software 00:28:44.945 21:42:05 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:28:44.945 21:42:05 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:44.945 21:42:05 -- host/digest.sh@97 -- # killprocess 83877 00:28:44.945 21:42:05 -- common/autotest_common.sh@926 -- # '[' -z 83877 ']' 00:28:44.945 21:42:05 -- common/autotest_common.sh@930 -- # kill -0 83877 00:28:44.945 21:42:05 -- common/autotest_common.sh@931 -- # uname 00:28:44.945 21:42:05 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:44.945 21:42:05 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 83877 00:28:45.203 killing process with pid 83877 00:28:45.203 Received shutdown signal, test time was about 2.000000 seconds 00:28:45.203 00:28:45.203 Latency(us) 00:28:45.203 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:45.203 =================================================================================================================== 00:28:45.203 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:45.203 21:42:05 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:28:45.203 21:42:05 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:28:45.203 21:42:05 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 83877' 00:28:45.203 21:42:05 -- common/autotest_common.sh@945 -- # kill 83877 00:28:45.203 21:42:05 -- common/autotest_common.sh@950 -- # wait 83877 00:28:45.203 21:42:06 -- host/digest.sh@126 -- # killprocess 83662 00:28:45.203 21:42:06 -- common/autotest_common.sh@926 -- # '[' -z 83662 ']' 00:28:45.203 21:42:06 -- common/autotest_common.sh@930 -- # kill -0 83662 00:28:45.203 21:42:06 -- common/autotest_common.sh@931 -- # uname 00:28:45.203 21:42:06 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:45.203 21:42:06 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 83662 00:28:45.203 killing process with pid 83662 00:28:45.203 21:42:06 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:28:45.203 21:42:06 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:28:45.203 21:42:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 83662' 00:28:45.203 21:42:06 -- common/autotest_common.sh@945 -- # kill 83662 00:28:45.203 21:42:06 -- common/autotest_common.sh@950 -- # wait 83662 00:28:45.461 ************************************ 
00:28:45.461 END TEST nvmf_digest_clean 00:28:45.461 ************************************ 00:28:45.461 00:28:45.461 real 0m17.811s 00:28:45.461 user 0m34.053s 00:28:45.461 sys 0m4.740s 00:28:45.461 21:42:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:45.461 21:42:06 -- common/autotest_common.sh@10 -- # set +x 00:28:45.461 21:42:06 -- host/digest.sh@136 -- # run_test nvmf_digest_error run_digest_error 00:28:45.461 21:42:06 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:45.461 21:42:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:45.461 21:42:06 -- common/autotest_common.sh@10 -- # set +x 00:28:45.461 ************************************ 00:28:45.461 START TEST nvmf_digest_error 00:28:45.461 ************************************ 00:28:45.461 21:42:06 -- common/autotest_common.sh@1104 -- # run_digest_error 00:28:45.461 21:42:06 -- host/digest.sh@101 -- # nvmfappstart --wait-for-rpc 00:28:45.461 21:42:06 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:28:45.461 21:42:06 -- common/autotest_common.sh@712 -- # xtrace_disable 00:28:45.461 21:42:06 -- common/autotest_common.sh@10 -- # set +x 00:28:45.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:45.461 21:42:06 -- nvmf/common.sh@469 -- # nvmfpid=83953 00:28:45.461 21:42:06 -- nvmf/common.sh@470 -- # waitforlisten 83953 00:28:45.461 21:42:06 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:28:45.461 21:42:06 -- common/autotest_common.sh@819 -- # '[' -z 83953 ']' 00:28:45.461 21:42:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:45.461 21:42:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:45.461 21:42:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:45.461 21:42:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:45.461 21:42:06 -- common/autotest_common.sh@10 -- # set +x 00:28:45.722 [2024-07-11 21:42:06.449074] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:28:45.722 [2024-07-11 21:42:06.449174] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:45.722 [2024-07-11 21:42:06.583343] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:45.980 [2024-07-11 21:42:06.674916] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:45.980 [2024-07-11 21:42:06.675061] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:45.980 [2024-07-11 21:42:06.675076] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:45.980 [2024-07-11 21:42:06.675085] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:45.980 [2024-07-11 21:42:06.675112] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:46.546 21:42:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:46.546 21:42:07 -- common/autotest_common.sh@852 -- # return 0 00:28:46.546 21:42:07 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:28:46.546 21:42:07 -- common/autotest_common.sh@718 -- # xtrace_disable 00:28:46.546 21:42:07 -- common/autotest_common.sh@10 -- # set +x 00:28:46.546 21:42:07 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:46.546 21:42:07 -- host/digest.sh@103 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:28:46.546 21:42:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:46.546 21:42:07 -- common/autotest_common.sh@10 -- # set +x 00:28:46.546 [2024-07-11 21:42:07.463607] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:28:46.546 21:42:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:46.546 21:42:07 -- host/digest.sh@104 -- # common_target_config 00:28:46.546 21:42:07 -- host/digest.sh@43 -- # rpc_cmd 00:28:46.546 21:42:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:46.546 21:42:07 -- common/autotest_common.sh@10 -- # set +x 00:28:46.803 null0 00:28:46.803 [2024-07-11 21:42:07.572744] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:46.803 [2024-07-11 21:42:07.596919] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:46.803 21:42:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:46.803 21:42:07 -- host/digest.sh@107 -- # run_bperf_err randread 4096 128 00:28:46.803 21:42:07 -- host/digest.sh@54 -- # local rw bs qd 00:28:46.803 21:42:07 -- host/digest.sh@56 -- # rw=randread 00:28:46.803 21:42:07 -- host/digest.sh@56 -- # bs=4096 00:28:46.803 21:42:07 -- host/digest.sh@56 -- # qd=128 00:28:46.803 21:42:07 -- host/digest.sh@58 -- # bperfpid=83985 00:28:46.803 21:42:07 -- host/digest.sh@60 -- # waitforlisten 83985 /var/tmp/bperf.sock 00:28:46.803 21:42:07 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:28:46.803 21:42:07 -- common/autotest_common.sh@819 -- # '[' -z 83985 ']' 00:28:46.803 21:42:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:46.803 21:42:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:46.803 21:42:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:46.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:46.803 21:42:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:46.803 21:42:07 -- common/autotest_common.sh@10 -- # set +x 00:28:46.803 [2024-07-11 21:42:07.655988] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:28:46.803 [2024-07-11 21:42:07.656351] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83985 ] 00:28:47.061 [2024-07-11 21:42:07.794031] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:47.061 [2024-07-11 21:42:07.889349] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:47.994 21:42:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:47.994 21:42:08 -- common/autotest_common.sh@852 -- # return 0 00:28:47.994 21:42:08 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:47.994 21:42:08 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:47.994 21:42:08 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:47.994 21:42:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:47.994 21:42:08 -- common/autotest_common.sh@10 -- # set +x 00:28:47.994 21:42:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:47.994 21:42:08 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:47.994 21:42:08 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:48.252 nvme0n1 00:28:48.510 21:42:09 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:28:48.510 21:42:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:48.510 21:42:09 -- common/autotest_common.sh@10 -- # set +x 00:28:48.510 21:42:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:48.510 21:42:09 -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:48.510 21:42:09 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:48.510 Running I/O for 2 seconds... 
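The error-path test differs from the clean path in two RPCs: before the target finishes init it assigns the crc32c opcode to the accel error module, and once bdevperf has attached (with --ddgst and bdev retries disabled) it tells that module to corrupt a batch of digests, which is why every read below completes with COMMAND TRANSIENT TRANSPORT ERROR. A condensed sequence, assuming rpc_cmd in the trace resolves to scripts/rpc.py against the target's default /var/tmp/spdk.sock:

# target side: route crc32c through the error-injection module
scripts/rpc.py accel_assign_opc -o crc32c -m error
# bperf side: keep NVMe error stats, disable bdev-level retries
scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
scripts/rpc.py accel_error_inject_error -o crc32c -t disable           # start with injection off
scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256    # corrupt 256 digest ops
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests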
00:28:48.510 [2024-07-11 21:42:09.378107] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b81b0) 00:28:48.510 [2024-07-11 21:42:09.378174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12583 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.510 [2024-07-11 21:42:09.378191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.510 [2024-07-11 21:42:09.394878] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b81b0) 00:28:48.510 [2024-07-11 21:42:09.394924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20123 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.510 [2024-07-11 21:42:09.394939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.510 [2024-07-11 21:42:09.411681] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b81b0) 00:28:48.510 [2024-07-11 21:42:09.411753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23606 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.510 [2024-07-11 21:42:09.411780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.510 [2024-07-11 21:42:09.430522] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b81b0) 00:28:48.510 [2024-07-11 21:42:09.430579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15937 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.510 [2024-07-11 21:42:09.430602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.510 [2024-07-11 21:42:09.449072] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b81b0) 00:28:48.510 [2024-07-11 21:42:09.449130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21442 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.510 [2024-07-11 21:42:09.449155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.767 [2024-07-11 21:42:09.466900] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b81b0) 00:28:48.767 [2024-07-11 21:42:09.466947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15199 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.767 [2024-07-11 21:42:09.466963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.767 [2024-07-11 21:42:09.483659] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b81b0) 00:28:48.767 [2024-07-11 21:42:09.483705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20715 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.767 [2024-07-11 21:42:09.483720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.767 [2024-07-11 21:42:09.500365] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b81b0) 00:28:48.767 [2024-07-11 21:42:09.500407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14502 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.767 [2024-07-11 21:42:09.500421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.767 [2024-07-11 21:42:09.517088] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b81b0) 00:28:48.767 [2024-07-11 21:42:09.517132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:11232 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.767 [2024-07-11 21:42:09.517147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.767 [2024-07-11 21:42:09.533837] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b81b0) 00:28:48.767 [2024-07-11 21:42:09.533890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:11792 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.767 [2024-07-11 21:42:09.533905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.767 [2024-07-11 21:42:09.550548] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b81b0) 00:28:48.767 [2024-07-11 21:42:09.550592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:4113 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.767 [2024-07-11 21:42:09.550606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.767 [2024-07-11 21:42:09.567234] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b81b0) 00:28:48.767 [2024-07-11 21:42:09.567278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:7821 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.767 [2024-07-11 21:42:09.567292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.767 [2024-07-11 21:42:09.584036] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b81b0) 00:28:48.767 [2024-07-11 21:42:09.584092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:2906 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.767 [2024-07-11 21:42:09.584107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.768 [2024-07-11 21:42:09.601013] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b81b0) 00:28:48.768 [2024-07-11 21:42:09.601094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:8372 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.768 [2024-07-11 21:42:09.601111] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.768 [2024-07-11 21:42:09.617915] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b81b0) 00:28:48.768 [2024-07-11 21:42:09.617964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:5176 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.768 [2024-07-11 21:42:09.617980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.768 [2024-07-11 21:42:09.634703] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b81b0) 00:28:48.768 [2024-07-11 21:42:09.634749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:22133 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.768 [2024-07-11 21:42:09.634765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.768 [2024-07-11 21:42:09.651405] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b81b0) 00:28:48.768 [2024-07-11 21:42:09.651452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:11000 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.768 [2024-07-11 21:42:09.651467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.768 [2024-07-11 21:42:09.668119] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b81b0) 00:28:48.768 [2024-07-11 21:42:09.668168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:25538 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.768 [2024-07-11 21:42:09.668183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.768 [2024-07-11 21:42:09.684855] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b81b0) 00:28:48.768 [2024-07-11 21:42:09.684902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:483 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.768 [2024-07-11 21:42:09.684917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.768 [2024-07-11 21:42:09.701587] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b81b0) 00:28:48.768 [2024-07-11 21:42:09.701632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:9288 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.768 [2024-07-11 21:42:09.701646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.027 [2024-07-11 21:42:09.718251] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b81b0) 00:28:49.027 [2024-07-11 21:42:09.718294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:18548 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.027 [2024-07-11 21:42:09.718309] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.027 [2024-07-11 21:42:09.734967] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b81b0) 00:28:49.027 [2024-07-11 21:42:09.735016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:1052 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.027 [2024-07-11 21:42:09.735031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.027 [2024-07-11 21:42:09.751749] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b81b0) 00:28:49.027 [2024-07-11 21:42:09.751801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:19843 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.027 [2024-07-11 21:42:09.751815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.027 [2024-07-11 21:42:09.768539] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b81b0) 00:28:49.028 [2024-07-11 21:42:09.768593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:9063 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.028 [2024-07-11 21:42:09.768608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.028 [2024-07-11 21:42:09.785272] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b81b0) 00:28:49.028 [2024-07-11 21:42:09.785316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:24695 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.028 [2024-07-11 21:42:09.785331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.028 [2024-07-11 21:42:09.802032] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b81b0) 00:28:49.028 [2024-07-11 21:42:09.802074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:16666 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.028 [2024-07-11 21:42:09.802089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.028 [2024-07-11 21:42:09.818741] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b81b0) 00:28:49.028 [2024-07-11 21:42:09.818784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:20856 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.028 [2024-07-11 21:42:09.818799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.028 [2024-07-11 21:42:09.835438] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b81b0) 00:28:49.028 [2024-07-11 21:42:09.835501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:8350 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:49.028 [2024-07-11 21:42:09.835518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.028 [2024-07-11 21:42:09.852182] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b81b0) 00:28:49.028 [2024-07-11 21:42:09.852230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:11420 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.028 [2024-07-11 21:42:09.852245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.028 [2024-07-11 21:42:09.868965] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b81b0) 00:28:49.028 [2024-07-11 21:42:09.869015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:18050 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.028 [2024-07-11 21:42:09.869029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.028 [2024-07-11 21:42:09.885733] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b81b0) 00:28:49.028 [2024-07-11 21:42:09.885777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:8214 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.028 [2024-07-11 21:42:09.885793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.028 [2024-07-11 21:42:09.902451] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b81b0) 00:28:49.028 [2024-07-11 21:42:09.902532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:3115 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.028 [2024-07-11 21:42:09.902548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.028 [2024-07-11 21:42:09.919209] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b81b0) 00:28:49.028 [2024-07-11 21:42:09.919256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:22892 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.028 [2024-07-11 21:42:09.919272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.028 [2024-07-11 21:42:09.935893] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b81b0) 00:28:49.028 [2024-07-11 21:42:09.935938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:22152 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.028 [2024-07-11 21:42:09.935953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.028 [2024-07-11 21:42:09.952646] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b81b0) 00:28:49.028 [2024-07-11 21:42:09.952695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 
nsid:1 lba:3485 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.028 [2024-07-11 21:42:09.952710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.028 [2024-07-11 21:42:09.969392] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b81b0) 00:28:49.028 [2024-07-11 21:42:09.969441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:13243 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.028 [2024-07-11 21:42:09.969458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.287 [2024-07-11 21:42:09.986107] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b81b0) 00:28:49.287 [2024-07-11 21:42:09.986153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:95 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.287 [2024-07-11 21:42:09.986168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.287 [2024-07-11 21:42:10.003014] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b81b0) 00:28:49.287 [2024-07-11 21:42:10.003059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:13265 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.287 [2024-07-11 21:42:10.003075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.287 [2024-07-11 21:42:10.019788] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b81b0) 00:28:49.287 [2024-07-11 21:42:10.019832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:16959 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.287 [2024-07-11 21:42:10.019846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.287 [2024-07-11 21:42:10.036492] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b81b0) 00:28:49.287 [2024-07-11 21:42:10.036533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:19564 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.287 [2024-07-11 21:42:10.036548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.287 [2024-07-11 21:42:10.053237] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b81b0) 00:28:49.287 [2024-07-11 21:42:10.053281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:14816 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.287 [2024-07-11 21:42:10.053296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.287 [2024-07-11 21:42:10.070100] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b81b0) 00:28:49.287 [2024-07-11 21:42:10.070156] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:15221 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.287 [2024-07-11 21:42:10.070171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.287 [2024-07-11 21:42:10.086940] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b81b0) 00:28:49.287 [2024-07-11 21:42:10.086993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:18462 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.287 [2024-07-11 21:42:10.087009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.287 [2024-07-11 21:42:10.103760] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b81b0) 00:28:49.287 [2024-07-11 21:42:10.103804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:9419 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.287 [2024-07-11 21:42:10.103818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.287 [2024-07-11 21:42:10.120475] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b81b0) 00:28:49.287 [2024-07-11 21:42:10.120525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:14152 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.287 [2024-07-11 21:42:10.120540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.287 [2024-07-11 21:42:10.139225] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b81b0) 00:28:49.287 [2024-07-11 21:42:10.139265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:7927 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.287 [2024-07-11 21:42:10.139280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.287 [2024-07-11 21:42:10.155930] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b81b0) 00:28:49.287 [2024-07-11 21:42:10.155970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:21834 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.287 [2024-07-11 21:42:10.155985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.287 [2024-07-11 21:42:10.172601] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b81b0) 00:28:49.287 [2024-07-11 21:42:10.172640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:16087 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.287 [2024-07-11 21:42:10.172654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.288 [2024-07-11 21:42:10.189222] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b81b0) 
00:28:49.288 [2024-07-11 21:42:10.189264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:16609 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.288 [2024-07-11 21:42:10.189278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.288 [2024-07-11 21:42:10.205912] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b81b0) 00:28:49.288 [2024-07-11 21:42:10.205955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:19022 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.288 [2024-07-11 21:42:10.205969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.288 [2024-07-11 21:42:10.222629] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b81b0) 00:28:49.288 [2024-07-11 21:42:10.222671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:4550 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.288 [2024-07-11 21:42:10.222686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.545 [2024-07-11 21:42:10.239324] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b81b0) 00:28:49.545 [2024-07-11 21:42:10.239365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:15866 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.545 [2024-07-11 21:42:10.239380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.545 [2024-07-11 21:42:10.256079] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b81b0) 00:28:49.545 [2024-07-11 21:42:10.256120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:16355 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.545 [2024-07-11 21:42:10.256134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.545 [2024-07-11 21:42:10.272846] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b81b0) 00:28:49.545 [2024-07-11 21:42:10.272886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:10719 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.545 [2024-07-11 21:42:10.272901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.545 [2024-07-11 21:42:10.289538] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b81b0) 00:28:49.546 [2024-07-11 21:42:10.289580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:10627 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.546 [2024-07-11 21:42:10.289594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.546 [2024-07-11 21:42:10.306213] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b81b0) 00:28:49.546 [2024-07-11 21:42:10.306276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:22805 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.546 [2024-07-11 21:42:10.306291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.546 [2024-07-11 21:42:10.322988] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b81b0) 00:28:49.546 [2024-07-11 21:42:10.323030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:22650 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.546 [2024-07-11 21:42:10.323055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.546 [2024-07-11 21:42:10.339692] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b81b0) 00:28:49.546 [2024-07-11 21:42:10.339733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:9342 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.546 [2024-07-11 21:42:10.339747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.546 [2024-07-11 21:42:10.356360] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b81b0) 00:28:49.546 [2024-07-11 21:42:10.356401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:21089 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.546 [2024-07-11 21:42:10.356416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.546 [2024-07-11 21:42:10.373086] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b81b0) 00:28:49.546 [2024-07-11 21:42:10.373127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:3188 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.546 [2024-07-11 21:42:10.373142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.546 [2024-07-11 21:42:10.389808] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b81b0) 00:28:49.546 [2024-07-11 21:42:10.389848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:17342 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.546 [2024-07-11 21:42:10.389863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.546 [2024-07-11 21:42:10.406443] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b81b0) 00:28:49.546 [2024-07-11 21:42:10.406504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:16756 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.546 [2024-07-11 21:42:10.406527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:28:49.546 [2024-07-11 21:42:10.423155] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b81b0) 00:28:49.546 [2024-07-11 21:42:10.423195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:18996 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.546 [2024-07-11 21:42:10.423209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.546 [2024-07-11 21:42:10.447056] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b81b0) 00:28:49.546 [2024-07-11 21:42:10.447104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:5077 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.546 [2024-07-11 21:42:10.447118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.546 [2024-07-11 21:42:10.463717] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b81b0) 00:28:49.546 [2024-07-11 21:42:10.463767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:17848 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.546 [2024-07-11 21:42:10.463781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.546 [2024-07-11 21:42:10.480342] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b81b0) 00:28:49.546 [2024-07-11 21:42:10.480383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:25204 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.546 [2024-07-11 21:42:10.480398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.804 [2024-07-11 21:42:10.496985] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b81b0) 00:28:49.804 [2024-07-11 21:42:10.497025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:3260 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.804 [2024-07-11 21:42:10.497040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.804 [2024-07-11 21:42:10.513686] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b81b0) 00:28:49.804 [2024-07-11 21:42:10.513743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:4197 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.804 [2024-07-11 21:42:10.513758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.804 [2024-07-11 21:42:10.530451] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b81b0) 00:28:49.804 [2024-07-11 21:42:10.530512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:22370 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.805 [2024-07-11 21:42:10.530529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.805 [2024-07-11 21:42:10.547142] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b81b0) 00:28:49.805 [2024-07-11 21:42:10.547182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:5539 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.805 [2024-07-11 21:42:10.547197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.805 [2024-07-11 21:42:10.564002] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b81b0) 00:28:49.805 [2024-07-11 21:42:10.564042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:1451 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.805 [2024-07-11 21:42:10.564057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.805 [2024-07-11 21:42:10.581000] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b81b0) 00:28:49.805 [2024-07-11 21:42:10.581041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:25282 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.805 [2024-07-11 21:42:10.581056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.805 [2024-07-11 21:42:10.597573] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b81b0) 00:28:49.805 [2024-07-11 21:42:10.597611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:23718 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.805 [2024-07-11 21:42:10.597642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.805 [2024-07-11 21:42:10.614239] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b81b0) 00:28:49.805 [2024-07-11 21:42:10.614279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:7064 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.805 [2024-07-11 21:42:10.614293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.805 [2024-07-11 21:42:10.630989] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b81b0) 00:28:49.805 [2024-07-11 21:42:10.631030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:10914 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.805 [2024-07-11 21:42:10.631044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.805 [2024-07-11 21:42:10.647710] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b81b0) 00:28:49.805 [2024-07-11 21:42:10.647752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:23800 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.805 [2024-07-11 
21:42:10.647767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.805 [2024-07-11 21:42:10.664354] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b81b0) 00:28:49.805 [2024-07-11 21:42:10.664410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:605 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.805 [2024-07-11 21:42:10.664441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.805 [2024-07-11 21:42:10.681324] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b81b0) 00:28:49.805 [2024-07-11 21:42:10.681362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:2393 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.805 [2024-07-11 21:42:10.681392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.805 [2024-07-11 21:42:10.697954] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b81b0) 00:28:49.805 [2024-07-11 21:42:10.697995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:24883 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.805 [2024-07-11 21:42:10.698010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.805 [2024-07-11 21:42:10.714870] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b81b0) 00:28:49.805 [2024-07-11 21:42:10.714910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:14471 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.805 [2024-07-11 21:42:10.714924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.805 [2024-07-11 21:42:10.731588] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b81b0) 00:28:49.805 [2024-07-11 21:42:10.731629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:21585 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.805 [2024-07-11 21:42:10.731644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.805 [2024-07-11 21:42:10.748319] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b81b0) 00:28:49.805 [2024-07-11 21:42:10.748361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:6745 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.805 [2024-07-11 21:42:10.748376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.063 [2024-07-11 21:42:10.765057] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b81b0) 00:28:50.064 [2024-07-11 21:42:10.765099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:10464 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:28:50.064 [2024-07-11 21:42:10.765114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.064 [2024-07-11 21:42:10.782281] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b81b0) 00:28:50.064 [2024-07-11 21:42:10.782336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:18523 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.064 [2024-07-11 21:42:10.782367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.064 [2024-07-11 21:42:10.799379] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b81b0) 00:28:50.064 [2024-07-11 21:42:10.799421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:5946 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.064 [2024-07-11 21:42:10.799436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.064 [2024-07-11 21:42:10.816314] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b81b0) 00:28:50.064 [2024-07-11 21:42:10.816355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:14976 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.064 [2024-07-11 21:42:10.816370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.064 [2024-07-11 21:42:10.833183] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b81b0) 00:28:50.064 [2024-07-11 21:42:10.833225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:4377 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.064 [2024-07-11 21:42:10.833240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.064 [2024-07-11 21:42:10.849905] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b81b0) 00:28:50.064 [2024-07-11 21:42:10.849947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:25297 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.064 [2024-07-11 21:42:10.849961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.064 [2024-07-11 21:42:10.866803] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b81b0) 00:28:50.064 [2024-07-11 21:42:10.866842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:17091 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.064 [2024-07-11 21:42:10.866858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.064 [2024-07-11 21:42:10.883678] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b81b0) 00:28:50.064 [2024-07-11 21:42:10.883717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:74 nsid:1 lba:13346 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.064 [2024-07-11 21:42:10.883747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.064 [2024-07-11 21:42:10.900518] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b81b0) 00:28:50.064 [2024-07-11 21:42:10.900561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:10049 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.064 [2024-07-11 21:42:10.900576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.064 [2024-07-11 21:42:10.917159] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b81b0) 00:28:50.064 [2024-07-11 21:42:10.917201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:6072 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.064 [2024-07-11 21:42:10.917216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.064 [2024-07-11 21:42:10.933894] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b81b0) 00:28:50.064 [2024-07-11 21:42:10.933935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:19334 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.064 [2024-07-11 21:42:10.933950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.064 [2024-07-11 21:42:10.950566] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b81b0) 00:28:50.064 [2024-07-11 21:42:10.950609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:17925 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.064 [2024-07-11 21:42:10.950623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.064 [2024-07-11 21:42:10.967237] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b81b0) 00:28:50.064 [2024-07-11 21:42:10.967280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:4265 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.064 [2024-07-11 21:42:10.967295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.064 [2024-07-11 21:42:10.983938] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b81b0) 00:28:50.064 [2024-07-11 21:42:10.983981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:9805 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.064 [2024-07-11 21:42:10.983996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.064 [2024-07-11 21:42:11.000626] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b81b0) 00:28:50.064 [2024-07-11 21:42:11.000676] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:8460 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.064 [2024-07-11 21:42:11.000691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.323 [2024-07-11 21:42:11.017463] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b81b0) 00:28:50.323 [2024-07-11 21:42:11.017524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:11614 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.323 [2024-07-11 21:42:11.017538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.323 [2024-07-11 21:42:11.034215] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b81b0) 00:28:50.323 [2024-07-11 21:42:11.034258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:2193 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.323 [2024-07-11 21:42:11.034273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.323 [2024-07-11 21:42:11.050921] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b81b0) 00:28:50.323 [2024-07-11 21:42:11.050963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:829 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.323 [2024-07-11 21:42:11.050978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.323 [2024-07-11 21:42:11.067627] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b81b0) 00:28:50.323 [2024-07-11 21:42:11.067676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:19404 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.323 [2024-07-11 21:42:11.067692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.323 [2024-07-11 21:42:11.084316] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b81b0) 00:28:50.323 [2024-07-11 21:42:11.084359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:4052 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.323 [2024-07-11 21:42:11.084374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.323 [2024-07-11 21:42:11.101115] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b81b0) 00:28:50.323 [2024-07-11 21:42:11.101161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22061 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.323 [2024-07-11 21:42:11.101175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.323 [2024-07-11 21:42:11.117886] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b81b0) 
00:28:50.323 [2024-07-11 21:42:11.117933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:13990 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.323 [2024-07-11 21:42:11.117948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.323 [2024-07-11 21:42:11.134625] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b81b0) 00:28:50.323 [2024-07-11 21:42:11.134671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:4915 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.323 [2024-07-11 21:42:11.134687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.323 [2024-07-11 21:42:11.151414] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b81b0) 00:28:50.323 [2024-07-11 21:42:11.151471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21883 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.323 [2024-07-11 21:42:11.151504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.323 [2024-07-11 21:42:11.168188] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b81b0) 00:28:50.323 [2024-07-11 21:42:11.168245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:11933 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.323 [2024-07-11 21:42:11.168260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.323 [2024-07-11 21:42:11.184931] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b81b0) 00:28:50.323 [2024-07-11 21:42:11.184982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:9749 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.323 [2024-07-11 21:42:11.184997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.323 [2024-07-11 21:42:11.201630] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b81b0) 00:28:50.323 [2024-07-11 21:42:11.201679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:2969 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.323 [2024-07-11 21:42:11.201695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.323 [2024-07-11 21:42:11.218369] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b81b0) 00:28:50.323 [2024-07-11 21:42:11.218413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:1753 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.323 [2024-07-11 21:42:11.218428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.323 [2024-07-11 21:42:11.235167] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x18b81b0) 00:28:50.323 [2024-07-11 21:42:11.235212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:25331 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.324 [2024-07-11 21:42:11.235226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.324 [2024-07-11 21:42:11.251930] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b81b0) 00:28:50.324 [2024-07-11 21:42:11.251974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:7815 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.324 [2024-07-11 21:42:11.251989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.324 [2024-07-11 21:42:11.268782] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b81b0) 00:28:50.324 [2024-07-11 21:42:11.268826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:24647 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.324 [2024-07-11 21:42:11.268841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.582 [2024-07-11 21:42:11.285461] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b81b0) 00:28:50.582 [2024-07-11 21:42:11.285516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:8932 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.582 [2024-07-11 21:42:11.285530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.582 [2024-07-11 21:42:11.302216] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b81b0) 00:28:50.582 [2024-07-11 21:42:11.302261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:9776 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.582 [2024-07-11 21:42:11.302276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.582 [2024-07-11 21:42:11.318987] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b81b0) 00:28:50.582 [2024-07-11 21:42:11.319031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:10859 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.582 [2024-07-11 21:42:11.319046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.582 [2024-07-11 21:42:11.335693] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b81b0) 00:28:50.582 [2024-07-11 21:42:11.335735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:7544 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.582 [2024-07-11 21:42:11.335749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.582 [2024-07-11 21:42:11.352446] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18b81b0) 00:28:50.582 [2024-07-11 21:42:11.352500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:15617 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.582 [2024-07-11 21:42:11.352516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.582 00:28:50.582 Latency(us) 00:28:50.582 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:50.582 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:28:50.582 nvme0n1 : 2.01 15038.71 58.74 0.00 0.00 8506.28 7864.32 32172.22 00:28:50.582 =================================================================================================================== 00:28:50.582 Total : 15038.71 58.74 0.00 0.00 8506.28 7864.32 32172.22 00:28:50.582 0 00:28:50.582 21:42:11 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:50.582 21:42:11 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:50.582 | .driver_specific 00:28:50.582 | .nvme_error 00:28:50.582 | .status_code 00:28:50.582 | .command_transient_transport_error' 00:28:50.582 21:42:11 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:50.582 21:42:11 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:50.841 21:42:11 -- host/digest.sh@71 -- # (( 118 > 0 )) 00:28:50.841 21:42:11 -- host/digest.sh@73 -- # killprocess 83985 00:28:50.841 21:42:11 -- common/autotest_common.sh@926 -- # '[' -z 83985 ']' 00:28:50.841 21:42:11 -- common/autotest_common.sh@930 -- # kill -0 83985 00:28:50.841 21:42:11 -- common/autotest_common.sh@931 -- # uname 00:28:50.841 21:42:11 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:50.841 21:42:11 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 83985 00:28:50.841 killing process with pid 83985 00:28:50.841 Received shutdown signal, test time was about 2.000000 seconds 00:28:50.841 00:28:50.841 Latency(us) 00:28:50.841 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:50.841 =================================================================================================================== 00:28:50.841 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:50.841 21:42:11 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:28:50.842 21:42:11 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:28:50.842 21:42:11 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 83985' 00:28:50.842 21:42:11 -- common/autotest_common.sh@945 -- # kill 83985 00:28:50.842 21:42:11 -- common/autotest_common.sh@950 -- # wait 83985 00:28:51.099 21:42:11 -- host/digest.sh@108 -- # run_bperf_err randread 131072 16 00:28:51.099 21:42:11 -- host/digest.sh@54 -- # local rw bs qd 00:28:51.099 21:42:11 -- host/digest.sh@56 -- # rw=randread 00:28:51.099 21:42:11 -- host/digest.sh@56 -- # bs=131072 00:28:51.099 21:42:11 -- host/digest.sh@56 -- # qd=16 00:28:51.099 21:42:11 -- host/digest.sh@58 -- # bperfpid=84045 00:28:51.099 21:42:11 -- host/digest.sh@60 -- # waitforlisten 84045 /var/tmp/bperf.sock 00:28:51.099 21:42:11 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:28:51.099 21:42:11 -- common/autotest_common.sh@819 -- # '[' -z 84045 ']' 
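The check (( 118 > 0 )) traced above is the pass criterion for this digest test case: bdevperf's iostat must report a non-zero count of completions that ended in COMMAND TRANSIENT TRANSPORT ERROR, which is how the harness confirms the injected digest corruption was actually detected on the reads. The trace only shows the helper expanding into rpc.py and jq, so the snippet below is a minimal standalone sketch of that query, assuming the same bperf RPC socket (/var/tmp/bperf.sock) and bdev name (nvme0n1) that appear in the trace; the function body is reconstructed from the trace rather than copied verbatim from digest.sh.

  # Sketch reconstructed from the trace above: ask the bdevperf app on
  # /var/tmp/bperf.sock for per-bdev iostat and pull out the number of
  # completions with TRANSIENT TRANSPORT ERROR (00/22). These counters are
  # only populated because bdev_nvme_set_options --nvme-error-stat was given.
  get_transient_errcount() {
      local bdev=$1
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" |
          jq -r '.bdevs[0]
                 | .driver_specific
                 | .nvme_error
                 | .status_code
                 | .command_transient_transport_error'
  }
  # Pass criterion used by the harness: at least one transient transport error
  # must have been counted (118 in this run), otherwise the digest test fails.
  (( $(get_transient_errcount nvme0n1) > 0 ))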
00:28:51.099 21:42:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:51.099 21:42:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:51.099 21:42:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:51.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:51.099 21:42:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:51.099 21:42:11 -- common/autotest_common.sh@10 -- # set +x 00:28:51.099 [2024-07-11 21:42:11.940823] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:28:51.099 [2024-07-11 21:42:11.941213] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84045 ] 00:28:51.099 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:51.099 Zero copy mechanism will not be used. 00:28:51.356 [2024-07-11 21:42:12.077943] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:51.357 [2024-07-11 21:42:12.172067] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:52.289 21:42:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:52.290 21:42:12 -- common/autotest_common.sh@852 -- # return 0 00:28:52.290 21:42:12 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:52.290 21:42:12 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:52.290 21:42:13 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:52.290 21:42:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:52.290 21:42:13 -- common/autotest_common.sh@10 -- # set +x 00:28:52.290 21:42:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:52.290 21:42:13 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:52.290 21:42:13 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:52.547 nvme0n1 00:28:52.805 21:42:13 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:28:52.805 21:42:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:52.805 21:42:13 -- common/autotest_common.sh@10 -- # set +x 00:28:52.805 21:42:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:52.805 21:42:13 -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:52.805 21:42:13 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:52.805 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:52.805 Zero copy mechanism will not be used. 00:28:52.805 Running I/O for 2 seconds... 
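Before the second run (randread, 131072-byte I/O, queue depth 16) starts producing the digest errors that follow, the trace above configures the freshly started bdevperf instance. The sketch below gathers those same RPC calls in one place: the bperf socket path and every flag are taken from the trace, while the assumption that rpc_cmd (invoked without -s) reaches the SPDK target application's default RPC socket is inferred from the harness conventions and is not shown in this part of the log.

  # Same commands as the trace above, in the same order, with comments added.
  # TARGET_RPC relying on the default rpc.py socket is an assumption (see note).
  BPERF_RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
  TARGET_RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py"

  # bdevperf side: keep per-status-code NVMe error counters and retry
  # indefinitely (-1), so digest errors are counted instead of failing the bdev.
  $BPERF_RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # accel side (assumed to be the target app on the default socket): clear any
  # previous crc32c error injection before connecting...
  $TARGET_RPC accel_error_inject_error -o crc32c -t disable
  # bdevperf side: attach the TCP controller with data digest enabled (--ddgst).
  $BPERF_RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # ...then corrupt every 32nd crc32c so the reads below hit data digest errors.
  $TARGET_RPC accel_error_inject_error -o crc32c -t corrupt -i 32
  # Kick off the 2-second workload over the bperf socket.
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests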
00:28:52.805 [2024-07-11 21:42:13.658564] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:52.806 [2024-07-11 21:42:13.658830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.806 [2024-07-11 21:42:13.659001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:52.806 [2024-07-11 21:42:13.663510] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:52.806 [2024-07-11 21:42:13.663716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.806 [2024-07-11 21:42:13.663902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:52.806 [2024-07-11 21:42:13.668497] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:52.806 [2024-07-11 21:42:13.668707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.806 [2024-07-11 21:42:13.668874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:52.806 [2024-07-11 21:42:13.673418] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:52.806 [2024-07-11 21:42:13.673651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.806 [2024-07-11 21:42:13.673834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:52.806 [2024-07-11 21:42:13.678181] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:52.806 [2024-07-11 21:42:13.678226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.806 [2024-07-11 21:42:13.678242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:52.806 [2024-07-11 21:42:13.682582] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:52.806 [2024-07-11 21:42:13.682624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.806 [2024-07-11 21:42:13.682639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:52.806 [2024-07-11 21:42:13.686841] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:52.806 [2024-07-11 21:42:13.686885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.806 [2024-07-11 21:42:13.686900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:52.806 [2024-07-11 21:42:13.691133] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050)
00:28:52.806 [2024-07-11 21:42:13.691176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:52.806 [2024-07-11 21:42:13.691191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:52.806 [2024-07-11 21:42:13.695538] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050)
00:28:52.806 [2024-07-11 21:42:13.695580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:52.806 [2024-07-11 21:42:13.695595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:52.806 [2024-07-11 21:42:13.699814] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050)
00:28:52.806 [2024-07-11 21:42:13.699856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:52.806 [2024-07-11 21:42:13.699871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
...
00:28:53.587 [2024-07-11 21:42:14.305802] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050)
00:28:53.587 [2024-07-11 21:42:14.305844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:53.587 [2024-07-11 21:42:14.305858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:53.587 [2024-07-11 21:42:14.310030] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050)
00:28:53.587 [2024-07-11 21:42:14.310072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:53.587 [2024-07-11 21:42:14.310087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:53.588 [2024-07-11 21:42:14.314359] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050)
00:28:53.588 [2024-07-11 21:42:14.314400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.588 [2024-07-11 21:42:14.314415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:53.588 [2024-07-11 21:42:14.318716] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:53.588 [2024-07-11 21:42:14.318758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.588 [2024-07-11 21:42:14.318772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:53.588 [2024-07-11 21:42:14.323079] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:53.588 [2024-07-11 21:42:14.323111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.588 [2024-07-11 21:42:14.323125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:53.588 [2024-07-11 21:42:14.327415] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:53.588 [2024-07-11 21:42:14.327458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.588 [2024-07-11 21:42:14.327473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:53.588 [2024-07-11 21:42:14.331782] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:53.588 [2024-07-11 21:42:14.331824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.588 [2024-07-11 21:42:14.331838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:53.588 [2024-07-11 21:42:14.336147] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:53.588 [2024-07-11 21:42:14.336190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.588 [2024-07-11 21:42:14.336205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:53.588 [2024-07-11 21:42:14.340442] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:53.588 [2024-07-11 21:42:14.340499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.588 [2024-07-11 21:42:14.340515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:53.588 [2024-07-11 21:42:14.344875] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:53.588 [2024-07-11 21:42:14.344917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.588 [2024-07-11 21:42:14.344931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:53.588 [2024-07-11 21:42:14.349223] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:53.588 [2024-07-11 21:42:14.349267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.588 [2024-07-11 21:42:14.349282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:53.588 [2024-07-11 21:42:14.353541] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:53.588 [2024-07-11 21:42:14.353582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.588 [2024-07-11 21:42:14.353596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:53.588 [2024-07-11 21:42:14.357843] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:53.588 [2024-07-11 21:42:14.357884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.588 [2024-07-11 21:42:14.357898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:53.588 [2024-07-11 21:42:14.362168] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:53.588 [2024-07-11 21:42:14.362213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.588 [2024-07-11 21:42:14.362228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:53.588 [2024-07-11 21:42:14.366584] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:53.588 [2024-07-11 21:42:14.366623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.588 [2024-07-11 21:42:14.366637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:53.588 [2024-07-11 21:42:14.370923] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:53.588 [2024-07-11 21:42:14.370966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.588 [2024-07-11 21:42:14.370981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:28:53.588 [2024-07-11 21:42:14.375317] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:53.588 [2024-07-11 21:42:14.375374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.588 [2024-07-11 21:42:14.375404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:53.588 [2024-07-11 21:42:14.379830] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:53.588 [2024-07-11 21:42:14.379887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.588 [2024-07-11 21:42:14.379917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:53.588 [2024-07-11 21:42:14.384300] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:53.588 [2024-07-11 21:42:14.384357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.588 [2024-07-11 21:42:14.384386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:53.588 [2024-07-11 21:42:14.388737] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:53.588 [2024-07-11 21:42:14.388778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.588 [2024-07-11 21:42:14.388808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:53.588 [2024-07-11 21:42:14.393073] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:53.588 [2024-07-11 21:42:14.393117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.588 [2024-07-11 21:42:14.393148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:53.588 [2024-07-11 21:42:14.397509] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:53.588 [2024-07-11 21:42:14.397549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.588 [2024-07-11 21:42:14.397564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:53.588 [2024-07-11 21:42:14.401869] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:53.588 [2024-07-11 21:42:14.401911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.588 [2024-07-11 21:42:14.401926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:53.588 [2024-07-11 21:42:14.406149] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:53.588 [2024-07-11 21:42:14.406190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.588 [2024-07-11 21:42:14.406205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:53.588 [2024-07-11 21:42:14.410440] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:53.588 [2024-07-11 21:42:14.410505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.588 [2024-07-11 21:42:14.410522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:53.588 [2024-07-11 21:42:14.414683] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:53.588 [2024-07-11 21:42:14.414726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.588 [2024-07-11 21:42:14.414740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:53.588 [2024-07-11 21:42:14.418956] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:53.589 [2024-07-11 21:42:14.418998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.589 [2024-07-11 21:42:14.419013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:53.589 [2024-07-11 21:42:14.423186] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:53.589 [2024-07-11 21:42:14.423229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.589 [2024-07-11 21:42:14.423243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:53.589 [2024-07-11 21:42:14.427549] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:53.589 [2024-07-11 21:42:14.427590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.589 [2024-07-11 21:42:14.427605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:53.589 [2024-07-11 21:42:14.431839] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:53.589 [2024-07-11 21:42:14.431882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.589 [2024-07-11 21:42:14.431897] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:53.589 [2024-07-11 21:42:14.436146] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:53.589 [2024-07-11 21:42:14.436189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.589 [2024-07-11 21:42:14.436204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:53.589 [2024-07-11 21:42:14.440429] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:53.589 [2024-07-11 21:42:14.440472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.589 [2024-07-11 21:42:14.440508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:53.589 [2024-07-11 21:42:14.444761] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:53.589 [2024-07-11 21:42:14.444803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.589 [2024-07-11 21:42:14.444817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:53.589 [2024-07-11 21:42:14.449000] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:53.589 [2024-07-11 21:42:14.449044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.589 [2024-07-11 21:42:14.449058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:53.589 [2024-07-11 21:42:14.453321] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:53.589 [2024-07-11 21:42:14.453365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.589 [2024-07-11 21:42:14.453379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:53.589 [2024-07-11 21:42:14.457653] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:53.589 [2024-07-11 21:42:14.457694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.589 [2024-07-11 21:42:14.457709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:53.589 [2024-07-11 21:42:14.462016] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:53.589 [2024-07-11 21:42:14.462059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:53.589 [2024-07-11 21:42:14.462073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:53.589 [2024-07-11 21:42:14.466350] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:53.589 [2024-07-11 21:42:14.466395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.589 [2024-07-11 21:42:14.466410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:53.589 [2024-07-11 21:42:14.470823] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:53.589 [2024-07-11 21:42:14.470865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.589 [2024-07-11 21:42:14.470879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:53.589 [2024-07-11 21:42:14.475294] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:53.589 [2024-07-11 21:42:14.475352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.589 [2024-07-11 21:42:14.475383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:53.589 [2024-07-11 21:42:14.479763] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:53.589 [2024-07-11 21:42:14.479804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.589 [2024-07-11 21:42:14.479834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:53.589 [2024-07-11 21:42:14.484223] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:53.589 [2024-07-11 21:42:14.484264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.589 [2024-07-11 21:42:14.484293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:53.589 [2024-07-11 21:42:14.488772] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:53.589 [2024-07-11 21:42:14.488814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.589 [2024-07-11 21:42:14.488828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:53.589 [2024-07-11 21:42:14.493150] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:53.589 [2024-07-11 21:42:14.493193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.589 [2024-07-11 21:42:14.493209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:53.589 [2024-07-11 21:42:14.497522] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:53.589 [2024-07-11 21:42:14.497573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.589 [2024-07-11 21:42:14.497603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:53.589 [2024-07-11 21:42:14.501853] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:53.589 [2024-07-11 21:42:14.501895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.589 [2024-07-11 21:42:14.501910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:53.589 [2024-07-11 21:42:14.506179] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:53.589 [2024-07-11 21:42:14.506222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.589 [2024-07-11 21:42:14.506237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:53.589 [2024-07-11 21:42:14.510583] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:53.589 [2024-07-11 21:42:14.510623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.589 [2024-07-11 21:42:14.510637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:53.589 [2024-07-11 21:42:14.514918] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:53.589 [2024-07-11 21:42:14.514960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.589 [2024-07-11 21:42:14.514974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:53.589 [2024-07-11 21:42:14.519324] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:53.589 [2024-07-11 21:42:14.519367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.590 [2024-07-11 21:42:14.519382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:53.590 [2024-07-11 21:42:14.523603] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:53.590 [2024-07-11 21:42:14.523644] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.590 [2024-07-11 21:42:14.523659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:53.590 [2024-07-11 21:42:14.527876] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:53.590 [2024-07-11 21:42:14.527918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.590 [2024-07-11 21:42:14.527932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:53.590 [2024-07-11 21:42:14.532224] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:53.590 [2024-07-11 21:42:14.532268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.590 [2024-07-11 21:42:14.532282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:53.849 [2024-07-11 21:42:14.536574] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:53.849 [2024-07-11 21:42:14.536616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.849 [2024-07-11 21:42:14.536629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:53.849 [2024-07-11 21:42:14.540873] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:53.849 [2024-07-11 21:42:14.540916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.849 [2024-07-11 21:42:14.540930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:53.849 [2024-07-11 21:42:14.545176] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:53.849 [2024-07-11 21:42:14.545218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.849 [2024-07-11 21:42:14.545233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:53.849 [2024-07-11 21:42:14.549395] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:53.849 [2024-07-11 21:42:14.549437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.849 [2024-07-11 21:42:14.549451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:53.849 [2024-07-11 21:42:14.553668] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 
00:28:53.849 [2024-07-11 21:42:14.553709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.849 [2024-07-11 21:42:14.553728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:53.849 [2024-07-11 21:42:14.557922] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:53.849 [2024-07-11 21:42:14.557963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.849 [2024-07-11 21:42:14.557977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:53.849 [2024-07-11 21:42:14.562244] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:53.849 [2024-07-11 21:42:14.562287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.849 [2024-07-11 21:42:14.562302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:53.849 [2024-07-11 21:42:14.566452] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:53.849 [2024-07-11 21:42:14.566514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.849 [2024-07-11 21:42:14.566531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:53.849 [2024-07-11 21:42:14.570762] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:53.849 [2024-07-11 21:42:14.570806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.849 [2024-07-11 21:42:14.570820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:53.849 [2024-07-11 21:42:14.575052] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:53.849 [2024-07-11 21:42:14.575094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.849 [2024-07-11 21:42:14.575108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:53.849 [2024-07-11 21:42:14.579449] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:53.849 [2024-07-11 21:42:14.579508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.849 [2024-07-11 21:42:14.579524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:53.849 [2024-07-11 21:42:14.583804] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:53.849 [2024-07-11 21:42:14.583847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.849 [2024-07-11 21:42:14.583862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:53.849 [2024-07-11 21:42:14.588040] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:53.849 [2024-07-11 21:42:14.588093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.849 [2024-07-11 21:42:14.588107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:53.849 [2024-07-11 21:42:14.592480] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:53.849 [2024-07-11 21:42:14.592535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.849 [2024-07-11 21:42:14.592550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:53.849 [2024-07-11 21:42:14.596879] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:53.849 [2024-07-11 21:42:14.596921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.849 [2024-07-11 21:42:14.596936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:53.849 [2024-07-11 21:42:14.601223] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:53.849 [2024-07-11 21:42:14.601268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.850 [2024-07-11 21:42:14.601283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:53.850 [2024-07-11 21:42:14.605617] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:53.850 [2024-07-11 21:42:14.605660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.850 [2024-07-11 21:42:14.605674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:53.850 [2024-07-11 21:42:14.609921] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:53.850 [2024-07-11 21:42:14.609964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.850 [2024-07-11 21:42:14.609978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:28:53.850 [2024-07-11 21:42:14.614238] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:53.850 [2024-07-11 21:42:14.614282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.850 [2024-07-11 21:42:14.614296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:53.850 [2024-07-11 21:42:14.618528] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:53.850 [2024-07-11 21:42:14.618568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.850 [2024-07-11 21:42:14.618582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:53.850 [2024-07-11 21:42:14.622865] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:53.850 [2024-07-11 21:42:14.622908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.850 [2024-07-11 21:42:14.622922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:53.850 [2024-07-11 21:42:14.627206] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:53.850 [2024-07-11 21:42:14.627250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.850 [2024-07-11 21:42:14.627265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:53.850 [2024-07-11 21:42:14.631592] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:53.850 [2024-07-11 21:42:14.631637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.850 [2024-07-11 21:42:14.631652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:53.850 [2024-07-11 21:42:14.635833] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:53.850 [2024-07-11 21:42:14.635875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.850 [2024-07-11 21:42:14.635889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:53.850 [2024-07-11 21:42:14.640145] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:53.850 [2024-07-11 21:42:14.640189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.850 [2024-07-11 21:42:14.640203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:53.850 [2024-07-11 21:42:14.644518] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:53.850 [2024-07-11 21:42:14.644559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.850 [2024-07-11 21:42:14.644574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:53.850 [2024-07-11 21:42:14.648770] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:53.850 [2024-07-11 21:42:14.648812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.850 [2024-07-11 21:42:14.648826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:53.850 [2024-07-11 21:42:14.653067] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:53.850 [2024-07-11 21:42:14.653111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.850 [2024-07-11 21:42:14.653126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:53.850 [2024-07-11 21:42:14.657458] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:53.850 [2024-07-11 21:42:14.657522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.850 [2024-07-11 21:42:14.657538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:53.850 [2024-07-11 21:42:14.661821] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:53.850 [2024-07-11 21:42:14.661863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.850 [2024-07-11 21:42:14.661878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:53.850 [2024-07-11 21:42:14.666532] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:53.850 [2024-07-11 21:42:14.666731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.850 [2024-07-11 21:42:14.666871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:53.850 [2024-07-11 21:42:14.671282] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:53.850 [2024-07-11 21:42:14.671327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.850 [2024-07-11 21:42:14.671341] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:53.850 [2024-07-11 21:42:14.675544] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:53.850 [2024-07-11 21:42:14.675586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.850 [2024-07-11 21:42:14.675600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:53.850 [2024-07-11 21:42:14.679778] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:53.850 [2024-07-11 21:42:14.679821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.850 [2024-07-11 21:42:14.679835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:53.850 [2024-07-11 21:42:14.684050] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:53.850 [2024-07-11 21:42:14.684095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.850 [2024-07-11 21:42:14.684110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:53.850 [2024-07-11 21:42:14.688288] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:53.850 [2024-07-11 21:42:14.688331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.850 [2024-07-11 21:42:14.688345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:53.850 [2024-07-11 21:42:14.692558] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:53.850 [2024-07-11 21:42:14.692599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.850 [2024-07-11 21:42:14.692613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:53.850 [2024-07-11 21:42:14.696787] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:53.850 [2024-07-11 21:42:14.696829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.850 [2024-07-11 21:42:14.696844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:53.850 [2024-07-11 21:42:14.701136] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:53.850 [2024-07-11 21:42:14.701179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:53.850 [2024-07-11 21:42:14.701194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:53.850 [2024-07-11 21:42:14.705384] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:53.850 [2024-07-11 21:42:14.705428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.850 [2024-07-11 21:42:14.705442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:53.850 [2024-07-11 21:42:14.709664] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:53.850 [2024-07-11 21:42:14.709706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.850 [2024-07-11 21:42:14.709721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:53.850 [2024-07-11 21:42:14.713932] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:53.850 [2024-07-11 21:42:14.713975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.850 [2024-07-11 21:42:14.713989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:53.850 [2024-07-11 21:42:14.718245] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:53.850 [2024-07-11 21:42:14.718290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.850 [2024-07-11 21:42:14.718304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:53.850 [2024-07-11 21:42:14.722631] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:53.850 [2024-07-11 21:42:14.722671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.851 [2024-07-11 21:42:14.722685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:53.851 [2024-07-11 21:42:14.726905] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:53.851 [2024-07-11 21:42:14.726948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.851 [2024-07-11 21:42:14.726962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:53.851 [2024-07-11 21:42:14.731305] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:53.851 [2024-07-11 21:42:14.731349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.851 [2024-07-11 21:42:14.731364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:53.851 [2024-07-11 21:42:14.735632] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:53.851 [2024-07-11 21:42:14.735674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.851 [2024-07-11 21:42:14.735689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:53.851 [2024-07-11 21:42:14.739979] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:53.851 [2024-07-11 21:42:14.740022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.851 [2024-07-11 21:42:14.740037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:53.851 [2024-07-11 21:42:14.744309] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:53.851 [2024-07-11 21:42:14.744353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.851 [2024-07-11 21:42:14.744368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:53.851 [2024-07-11 21:42:14.748587] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:53.851 [2024-07-11 21:42:14.748628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.851 [2024-07-11 21:42:14.748643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:53.851 [2024-07-11 21:42:14.752855] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:53.851 [2024-07-11 21:42:14.752895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.851 [2024-07-11 21:42:14.752909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:53.851 [2024-07-11 21:42:14.757129] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:53.851 [2024-07-11 21:42:14.757173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.851 [2024-07-11 21:42:14.757187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:53.851 [2024-07-11 21:42:14.761468] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:53.851 [2024-07-11 21:42:14.761523] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.851 [2024-07-11 21:42:14.761538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:53.851 [2024-07-11 21:42:14.765741] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:53.851 [2024-07-11 21:42:14.765782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.851 [2024-07-11 21:42:14.765798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:53.851 [2024-07-11 21:42:14.770054] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:53.851 [2024-07-11 21:42:14.770096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.851 [2024-07-11 21:42:14.770111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:53.851 [2024-07-11 21:42:14.774347] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:53.851 [2024-07-11 21:42:14.774389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.851 [2024-07-11 21:42:14.774404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:53.851 [2024-07-11 21:42:14.778637] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:53.851 [2024-07-11 21:42:14.778676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.851 [2024-07-11 21:42:14.778691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:53.851 [2024-07-11 21:42:14.782838] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:53.851 [2024-07-11 21:42:14.782879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.851 [2024-07-11 21:42:14.782893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:53.851 [2024-07-11 21:42:14.787081] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:53.851 [2024-07-11 21:42:14.787124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.851 [2024-07-11 21:42:14.787139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:53.851 [2024-07-11 21:42:14.791341] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 
00:28:53.851 [2024-07-11 21:42:14.791382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.851 [2024-07-11 21:42:14.791396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:53.851 [2024-07-11 21:42:14.795572] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:53.851 [2024-07-11 21:42:14.795612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.851 [2024-07-11 21:42:14.795627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:54.110 [2024-07-11 21:42:14.799809] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.110 [2024-07-11 21:42:14.799851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.110 [2024-07-11 21:42:14.799866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.110 [2024-07-11 21:42:14.804074] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.110 [2024-07-11 21:42:14.804116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.110 [2024-07-11 21:42:14.804132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:54.110 [2024-07-11 21:42:14.808326] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.110 [2024-07-11 21:42:14.808368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.110 [2024-07-11 21:42:14.808383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:54.110 [2024-07-11 21:42:14.812659] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.110 [2024-07-11 21:42:14.812701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.110 [2024-07-11 21:42:14.812715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:54.110 [2024-07-11 21:42:14.816949] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.110 [2024-07-11 21:42:14.816996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.110 [2024-07-11 21:42:14.817011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.110 [2024-07-11 21:42:14.821328] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.110 [2024-07-11 21:42:14.821371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.110 [2024-07-11 21:42:14.821385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:54.110 [2024-07-11 21:42:14.825678] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.110 [2024-07-11 21:42:14.825720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.110 [2024-07-11 21:42:14.825734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:54.110 [2024-07-11 21:42:14.829910] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.110 [2024-07-11 21:42:14.829946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.110 [2024-07-11 21:42:14.829960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:54.110 [2024-07-11 21:42:14.834177] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.110 [2024-07-11 21:42:14.834219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.110 [2024-07-11 21:42:14.834234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.110 [2024-07-11 21:42:14.838475] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.110 [2024-07-11 21:42:14.838547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.110 [2024-07-11 21:42:14.838563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:54.110 [2024-07-11 21:42:14.842774] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.110 [2024-07-11 21:42:14.842816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.110 [2024-07-11 21:42:14.842831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:54.110 [2024-07-11 21:42:14.847088] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.110 [2024-07-11 21:42:14.847130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.110 [2024-07-11 21:42:14.847145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:28:54.110 [2024-07-11 21:42:14.851400] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.110 [2024-07-11 21:42:14.851443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.110 [2024-07-11 21:42:14.851457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.110 [2024-07-11 21:42:14.855687] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.110 [2024-07-11 21:42:14.855729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.110 [2024-07-11 21:42:14.855743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:54.110 [2024-07-11 21:42:14.859992] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.110 [2024-07-11 21:42:14.860035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.110 [2024-07-11 21:42:14.860049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:54.110 [2024-07-11 21:42:14.864245] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.110 [2024-07-11 21:42:14.864287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.110 [2024-07-11 21:42:14.864302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:54.110 [2024-07-11 21:42:14.868559] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.110 [2024-07-11 21:42:14.868600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.110 [2024-07-11 21:42:14.868614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.110 [2024-07-11 21:42:14.872895] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.110 [2024-07-11 21:42:14.872938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.110 [2024-07-11 21:42:14.872952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:54.110 [2024-07-11 21:42:14.877193] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.110 [2024-07-11 21:42:14.877235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.110 [2024-07-11 21:42:14.877250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:54.110 [2024-07-11 21:42:14.881453] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.110 [2024-07-11 21:42:14.881510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.110 [2024-07-11 21:42:14.881527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:54.110 [2024-07-11 21:42:14.885757] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.110 [2024-07-11 21:42:14.885800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.110 [2024-07-11 21:42:14.885815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.110 [2024-07-11 21:42:14.890020] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.111 [2024-07-11 21:42:14.890063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.111 [2024-07-11 21:42:14.890078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:54.111 [2024-07-11 21:42:14.894282] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.111 [2024-07-11 21:42:14.894323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.111 [2024-07-11 21:42:14.894338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:54.111 [2024-07-11 21:42:14.898605] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.111 [2024-07-11 21:42:14.898645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.111 [2024-07-11 21:42:14.898659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:54.111 [2024-07-11 21:42:14.902959] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.111 [2024-07-11 21:42:14.903002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.111 [2024-07-11 21:42:14.903016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.111 [2024-07-11 21:42:14.907298] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.111 [2024-07-11 21:42:14.907342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.111 [2024-07-11 21:42:14.907356] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:54.111 [2024-07-11 21:42:14.911697] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.111 [2024-07-11 21:42:14.911739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.111 [2024-07-11 21:42:14.911770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:54.111 [2024-07-11 21:42:14.916282] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.111 [2024-07-11 21:42:14.916323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.111 [2024-07-11 21:42:14.916353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:54.111 [2024-07-11 21:42:14.920756] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.111 [2024-07-11 21:42:14.920796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.111 [2024-07-11 21:42:14.920826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.111 [2024-07-11 21:42:14.925014] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.111 [2024-07-11 21:42:14.925054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.111 [2024-07-11 21:42:14.925084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:54.111 [2024-07-11 21:42:14.929373] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.111 [2024-07-11 21:42:14.929416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.111 [2024-07-11 21:42:14.929430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:54.111 [2024-07-11 21:42:14.933675] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.111 [2024-07-11 21:42:14.933732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.111 [2024-07-11 21:42:14.933763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:54.111 [2024-07-11 21:42:14.938026] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.111 [2024-07-11 21:42:14.938070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:54.111 [2024-07-11 21:42:14.938086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.111 [2024-07-11 21:42:14.942298] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.111 [2024-07-11 21:42:14.942341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.111 [2024-07-11 21:42:14.942355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:54.111 [2024-07-11 21:42:14.946609] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.111 [2024-07-11 21:42:14.946650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.111 [2024-07-11 21:42:14.946664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:54.111 [2024-07-11 21:42:14.950813] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.111 [2024-07-11 21:42:14.950855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.111 [2024-07-11 21:42:14.950870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:54.111 [2024-07-11 21:42:14.955040] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.111 [2024-07-11 21:42:14.955082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.111 [2024-07-11 21:42:14.955097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.111 [2024-07-11 21:42:14.959321] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.111 [2024-07-11 21:42:14.959363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.111 [2024-07-11 21:42:14.959378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:54.111 [2024-07-11 21:42:14.963585] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.111 [2024-07-11 21:42:14.963626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.111 [2024-07-11 21:42:14.963640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:54.111 [2024-07-11 21:42:14.967794] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.111 [2024-07-11 21:42:14.967836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.111 [2024-07-11 21:42:14.967850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:54.111 [2024-07-11 21:42:14.972093] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.111 [2024-07-11 21:42:14.972135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.111 [2024-07-11 21:42:14.972149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.111 [2024-07-11 21:42:14.976296] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.111 [2024-07-11 21:42:14.976340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.111 [2024-07-11 21:42:14.976354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:54.111 [2024-07-11 21:42:14.980654] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.111 [2024-07-11 21:42:14.980696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.111 [2024-07-11 21:42:14.980710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:54.111 [2024-07-11 21:42:14.984911] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.111 [2024-07-11 21:42:14.984952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.111 [2024-07-11 21:42:14.984966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:54.111 [2024-07-11 21:42:14.989259] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.111 [2024-07-11 21:42:14.989301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.111 [2024-07-11 21:42:14.989317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.111 [2024-07-11 21:42:14.993562] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.111 [2024-07-11 21:42:14.993603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.111 [2024-07-11 21:42:14.993617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:54.111 [2024-07-11 21:42:14.997897] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.111 [2024-07-11 21:42:14.997940] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.111 [2024-07-11 21:42:14.997955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:54.111 [2024-07-11 21:42:15.002127] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.111 [2024-07-11 21:42:15.002169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.111 [2024-07-11 21:42:15.002184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:54.111 [2024-07-11 21:42:15.006374] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.111 [2024-07-11 21:42:15.006416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.111 [2024-07-11 21:42:15.006430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.111 [2024-07-11 21:42:15.010648] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.111 [2024-07-11 21:42:15.010693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.111 [2024-07-11 21:42:15.010707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:54.111 [2024-07-11 21:42:15.014961] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.112 [2024-07-11 21:42:15.015004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.112 [2024-07-11 21:42:15.015019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:54.112 [2024-07-11 21:42:15.019183] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.112 [2024-07-11 21:42:15.019226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.112 [2024-07-11 21:42:15.019240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:54.112 [2024-07-11 21:42:15.023471] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.112 [2024-07-11 21:42:15.023524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.112 [2024-07-11 21:42:15.023538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.112 [2024-07-11 21:42:15.027803] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 
00:28:54.112 [2024-07-11 21:42:15.027845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.112 [2024-07-11 21:42:15.027860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:54.112 [2024-07-11 21:42:15.032085] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.112 [2024-07-11 21:42:15.032128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.112 [2024-07-11 21:42:15.032144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:54.112 [2024-07-11 21:42:15.036385] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.112 [2024-07-11 21:42:15.036428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.112 [2024-07-11 21:42:15.036442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:54.112 [2024-07-11 21:42:15.040729] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.112 [2024-07-11 21:42:15.040772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.112 [2024-07-11 21:42:15.040786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.112 [2024-07-11 21:42:15.044972] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.112 [2024-07-11 21:42:15.045021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.112 [2024-07-11 21:42:15.045035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:54.112 [2024-07-11 21:42:15.049267] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.112 [2024-07-11 21:42:15.049310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.112 [2024-07-11 21:42:15.049324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:54.112 [2024-07-11 21:42:15.053647] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.112 [2024-07-11 21:42:15.053688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.112 [2024-07-11 21:42:15.053702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:54.112 [2024-07-11 21:42:15.057990] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.112 [2024-07-11 21:42:15.058033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.112 [2024-07-11 21:42:15.058047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.370 [2024-07-11 21:42:15.062256] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.370 [2024-07-11 21:42:15.062299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.370 [2024-07-11 21:42:15.062313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:54.370 [2024-07-11 21:42:15.066578] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.370 [2024-07-11 21:42:15.066618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.370 [2024-07-11 21:42:15.066632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:54.370 [2024-07-11 21:42:15.070816] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.370 [2024-07-11 21:42:15.070858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.370 [2024-07-11 21:42:15.070872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:54.370 [2024-07-11 21:42:15.075117] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.370 [2024-07-11 21:42:15.075160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.370 [2024-07-11 21:42:15.075175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.370 [2024-07-11 21:42:15.079395] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.370 [2024-07-11 21:42:15.079438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.370 [2024-07-11 21:42:15.079452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:54.370 [2024-07-11 21:42:15.083647] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.370 [2024-07-11 21:42:15.083689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.370 [2024-07-11 21:42:15.083704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:28:54.371 [2024-07-11 21:42:15.087882] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.371 [2024-07-11 21:42:15.087924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.371 [2024-07-11 21:42:15.087938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:54.371 [2024-07-11 21:42:15.092139] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.371 [2024-07-11 21:42:15.092184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.371 [2024-07-11 21:42:15.092198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.371 [2024-07-11 21:42:15.096497] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.371 [2024-07-11 21:42:15.096538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.371 [2024-07-11 21:42:15.096551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:54.371 [2024-07-11 21:42:15.100791] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.371 [2024-07-11 21:42:15.100833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.371 [2024-07-11 21:42:15.100848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:54.371 [2024-07-11 21:42:15.105146] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.371 [2024-07-11 21:42:15.105189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.371 [2024-07-11 21:42:15.105204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:54.371 [2024-07-11 21:42:15.109429] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.371 [2024-07-11 21:42:15.109472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.371 [2024-07-11 21:42:15.109504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.371 [2024-07-11 21:42:15.113770] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.371 [2024-07-11 21:42:15.113810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.371 [2024-07-11 21:42:15.113824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:54.371 [2024-07-11 21:42:15.118062] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.371 [2024-07-11 21:42:15.118105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.371 [2024-07-11 21:42:15.118120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:54.371 [2024-07-11 21:42:15.122332] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.371 [2024-07-11 21:42:15.122374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.371 [2024-07-11 21:42:15.122389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:54.371 [2024-07-11 21:42:15.126535] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.371 [2024-07-11 21:42:15.126576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.371 [2024-07-11 21:42:15.126590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.371 [2024-07-11 21:42:15.130796] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.371 [2024-07-11 21:42:15.130837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.371 [2024-07-11 21:42:15.130852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:54.371 [2024-07-11 21:42:15.135052] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.371 [2024-07-11 21:42:15.135094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.371 [2024-07-11 21:42:15.135109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:54.371 [2024-07-11 21:42:15.139350] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.371 [2024-07-11 21:42:15.139395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.371 [2024-07-11 21:42:15.139409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:54.371 [2024-07-11 21:42:15.143609] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.371 [2024-07-11 21:42:15.143650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.371 [2024-07-11 21:42:15.143664] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.371 [2024-07-11 21:42:15.147942] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.371 [2024-07-11 21:42:15.147984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.371 [2024-07-11 21:42:15.147998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:54.371 [2024-07-11 21:42:15.152257] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.371 [2024-07-11 21:42:15.152299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.371 [2024-07-11 21:42:15.152314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:54.371 [2024-07-11 21:42:15.156616] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.371 [2024-07-11 21:42:15.156658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.371 [2024-07-11 21:42:15.156671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:54.371 [2024-07-11 21:42:15.160902] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.371 [2024-07-11 21:42:15.160945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.371 [2024-07-11 21:42:15.160960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.371 [2024-07-11 21:42:15.165246] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.371 [2024-07-11 21:42:15.165290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.371 [2024-07-11 21:42:15.165305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:54.371 [2024-07-11 21:42:15.169631] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.371 [2024-07-11 21:42:15.169673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.371 [2024-07-11 21:42:15.169688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:54.371 [2024-07-11 21:42:15.173940] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.371 [2024-07-11 21:42:15.173983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:54.371 [2024-07-11 21:42:15.173998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:54.371 [2024-07-11 21:42:15.178273] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.371 [2024-07-11 21:42:15.178315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.371 [2024-07-11 21:42:15.178330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.371 [2024-07-11 21:42:15.182604] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.371 [2024-07-11 21:42:15.182645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.371 [2024-07-11 21:42:15.182659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:54.371 [2024-07-11 21:42:15.187014] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.371 [2024-07-11 21:42:15.187056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.371 [2024-07-11 21:42:15.187071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:54.371 [2024-07-11 21:42:15.191368] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.371 [2024-07-11 21:42:15.191411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.371 [2024-07-11 21:42:15.191426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:54.371 [2024-07-11 21:42:15.195725] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.371 [2024-07-11 21:42:15.195769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.371 [2024-07-11 21:42:15.195784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.371 [2024-07-11 21:42:15.200010] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.371 [2024-07-11 21:42:15.200055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.371 [2024-07-11 21:42:15.200070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:54.371 [2024-07-11 21:42:15.204316] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.371 [2024-07-11 21:42:15.204359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.371 [2024-07-11 21:42:15.204373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:54.371 [2024-07-11 21:42:15.208647] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.371 [2024-07-11 21:42:15.208687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.371 [2024-07-11 21:42:15.208702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:54.372 [2024-07-11 21:42:15.212925] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.372 [2024-07-11 21:42:15.212967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.372 [2024-07-11 21:42:15.212982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.372 [2024-07-11 21:42:15.217189] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.372 [2024-07-11 21:42:15.217232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.372 [2024-07-11 21:42:15.217247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:54.372 [2024-07-11 21:42:15.221468] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.372 [2024-07-11 21:42:15.221522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.372 [2024-07-11 21:42:15.221537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:54.372 [2024-07-11 21:42:15.225713] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.372 [2024-07-11 21:42:15.225755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.372 [2024-07-11 21:42:15.225770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:54.372 [2024-07-11 21:42:15.229996] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.372 [2024-07-11 21:42:15.230038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.372 [2024-07-11 21:42:15.230053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.372 [2024-07-11 21:42:15.234229] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.372 [2024-07-11 21:42:15.234271] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.372 [2024-07-11 21:42:15.234285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:54.372 [2024-07-11 21:42:15.238544] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.372 [2024-07-11 21:42:15.238585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.372 [2024-07-11 21:42:15.238598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:54.372 [2024-07-11 21:42:15.242816] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.372 [2024-07-11 21:42:15.242857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.372 [2024-07-11 21:42:15.242871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:54.372 [2024-07-11 21:42:15.247094] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.372 [2024-07-11 21:42:15.247137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.372 [2024-07-11 21:42:15.247151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.372 [2024-07-11 21:42:15.251451] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.372 [2024-07-11 21:42:15.251507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.372 [2024-07-11 21:42:15.251523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:54.372 [2024-07-11 21:42:15.255745] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.372 [2024-07-11 21:42:15.255786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.372 [2024-07-11 21:42:15.255801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:54.372 [2024-07-11 21:42:15.260082] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.372 [2024-07-11 21:42:15.260124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.372 [2024-07-11 21:42:15.260138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:54.372 [2024-07-11 21:42:15.264505] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 
00:28:54.372 [2024-07-11 21:42:15.264544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.372 [2024-07-11 21:42:15.264558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.372 [2024-07-11 21:42:15.268810] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.372 [2024-07-11 21:42:15.268853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.372 [2024-07-11 21:42:15.268867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:54.372 [2024-07-11 21:42:15.273079] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.372 [2024-07-11 21:42:15.273121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.372 [2024-07-11 21:42:15.273136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:54.372 [2024-07-11 21:42:15.277335] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.372 [2024-07-11 21:42:15.277378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.372 [2024-07-11 21:42:15.277393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:54.372 [2024-07-11 21:42:15.281587] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.372 [2024-07-11 21:42:15.281628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.372 [2024-07-11 21:42:15.281642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.372 [2024-07-11 21:42:15.285833] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.372 [2024-07-11 21:42:15.285875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.372 [2024-07-11 21:42:15.285889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:54.372 [2024-07-11 21:42:15.290162] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.372 [2024-07-11 21:42:15.290205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.372 [2024-07-11 21:42:15.290219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:54.372 [2024-07-11 21:42:15.294561] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.372 [2024-07-11 21:42:15.294612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.372 [2024-07-11 21:42:15.294626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:54.372 [2024-07-11 21:42:15.298827] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.372 [2024-07-11 21:42:15.298868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.372 [2024-07-11 21:42:15.298882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.372 [2024-07-11 21:42:15.303065] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.372 [2024-07-11 21:42:15.303107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.372 [2024-07-11 21:42:15.303122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:54.372 [2024-07-11 21:42:15.307297] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.372 [2024-07-11 21:42:15.307339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.372 [2024-07-11 21:42:15.307353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:54.372 [2024-07-11 21:42:15.311565] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.372 [2024-07-11 21:42:15.311605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.372 [2024-07-11 21:42:15.311620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:54.372 [2024-07-11 21:42:15.315785] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.372 [2024-07-11 21:42:15.315817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.372 [2024-07-11 21:42:15.315831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.635 [2024-07-11 21:42:15.320029] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.635 [2024-07-11 21:42:15.320072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.635 [2024-07-11 21:42:15.320086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:28:54.635 [2024-07-11 21:42:15.324398] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.635 [2024-07-11 21:42:15.324441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.635 [2024-07-11 21:42:15.324455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:54.635 [2024-07-11 21:42:15.328752] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.635 [2024-07-11 21:42:15.328792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.635 [2024-07-11 21:42:15.328807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:54.635 [2024-07-11 21:42:15.333033] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.635 [2024-07-11 21:42:15.333075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.635 [2024-07-11 21:42:15.333090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.635 [2024-07-11 21:42:15.337316] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.635 [2024-07-11 21:42:15.337363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.635 [2024-07-11 21:42:15.337378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:54.635 [2024-07-11 21:42:15.341597] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.635 [2024-07-11 21:42:15.341638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.635 [2024-07-11 21:42:15.341652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:54.635 [2024-07-11 21:42:15.345911] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.635 [2024-07-11 21:42:15.345950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.635 [2024-07-11 21:42:15.345965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:54.635 [2024-07-11 21:42:15.350224] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.635 [2024-07-11 21:42:15.350266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.635 [2024-07-11 21:42:15.350280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.635 [2024-07-11 21:42:15.354603] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.635 [2024-07-11 21:42:15.354644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.635 [2024-07-11 21:42:15.354658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:54.635 [2024-07-11 21:42:15.358925] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.635 [2024-07-11 21:42:15.358971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.635 [2024-07-11 21:42:15.358986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:54.635 [2024-07-11 21:42:15.363276] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.635 [2024-07-11 21:42:15.363317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.635 [2024-07-11 21:42:15.363331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:54.635 [2024-07-11 21:42:15.367603] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.635 [2024-07-11 21:42:15.367644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.635 [2024-07-11 21:42:15.367659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.635 [2024-07-11 21:42:15.371963] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.635 [2024-07-11 21:42:15.372005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.635 [2024-07-11 21:42:15.372020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:54.635 [2024-07-11 21:42:15.376237] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.635 [2024-07-11 21:42:15.376279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.635 [2024-07-11 21:42:15.376293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:54.635 [2024-07-11 21:42:15.380599] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.635 [2024-07-11 21:42:15.380640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.635 [2024-07-11 21:42:15.380654] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:54.635 [2024-07-11 21:42:15.384922] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.635 [2024-07-11 21:42:15.384965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.635 [2024-07-11 21:42:15.384979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.635 [2024-07-11 21:42:15.389175] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.635 [2024-07-11 21:42:15.389216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.635 [2024-07-11 21:42:15.389230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:54.635 [2024-07-11 21:42:15.393501] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.635 [2024-07-11 21:42:15.393541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.635 [2024-07-11 21:42:15.393555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:54.635 [2024-07-11 21:42:15.397843] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.635 [2024-07-11 21:42:15.397884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.635 [2024-07-11 21:42:15.397899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:54.635 [2024-07-11 21:42:15.402218] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.635 [2024-07-11 21:42:15.402263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.635 [2024-07-11 21:42:15.402278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.635 [2024-07-11 21:42:15.406642] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.635 [2024-07-11 21:42:15.406684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.635 [2024-07-11 21:42:15.406699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:54.635 [2024-07-11 21:42:15.410939] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.635 [2024-07-11 21:42:15.410984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:54.635 [2024-07-11 21:42:15.410998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:54.635 [2024-07-11 21:42:15.415285] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.635 [2024-07-11 21:42:15.415329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.635 [2024-07-11 21:42:15.415344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:54.635 [2024-07-11 21:42:15.419602] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.635 [2024-07-11 21:42:15.419644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.635 [2024-07-11 21:42:15.419658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.635 [2024-07-11 21:42:15.423833] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.635 [2024-07-11 21:42:15.423876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.636 [2024-07-11 21:42:15.423891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:54.636 [2024-07-11 21:42:15.427984] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.636 [2024-07-11 21:42:15.428027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.636 [2024-07-11 21:42:15.428041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:54.636 [2024-07-11 21:42:15.432272] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.636 [2024-07-11 21:42:15.432314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.636 [2024-07-11 21:42:15.432328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:54.636 [2024-07-11 21:42:15.436625] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.636 [2024-07-11 21:42:15.436665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.636 [2024-07-11 21:42:15.436679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.636 [2024-07-11 21:42:15.440861] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.636 [2024-07-11 21:42:15.440904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.636 [2024-07-11 21:42:15.440918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:54.636 [2024-07-11 21:42:15.445199] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.636 [2024-07-11 21:42:15.445240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.636 [2024-07-11 21:42:15.445254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:54.636 [2024-07-11 21:42:15.449476] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.636 [2024-07-11 21:42:15.449530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.636 [2024-07-11 21:42:15.449544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:54.636 [2024-07-11 21:42:15.453821] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.636 [2024-07-11 21:42:15.453863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.636 [2024-07-11 21:42:15.453877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.636 [2024-07-11 21:42:15.458061] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.636 [2024-07-11 21:42:15.458102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.636 [2024-07-11 21:42:15.458116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:54.636 [2024-07-11 21:42:15.462354] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.636 [2024-07-11 21:42:15.462395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.636 [2024-07-11 21:42:15.462410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:54.636 [2024-07-11 21:42:15.466649] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.636 [2024-07-11 21:42:15.466690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.636 [2024-07-11 21:42:15.466703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:54.636 [2024-07-11 21:42:15.470896] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.636 [2024-07-11 21:42:15.470939] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.636 [2024-07-11 21:42:15.470953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.636 [2024-07-11 21:42:15.475207] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.636 [2024-07-11 21:42:15.475249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.636 [2024-07-11 21:42:15.475263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:54.636 [2024-07-11 21:42:15.479565] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.636 [2024-07-11 21:42:15.479606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.636 [2024-07-11 21:42:15.479620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:54.636 [2024-07-11 21:42:15.483879] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.636 [2024-07-11 21:42:15.483921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.636 [2024-07-11 21:42:15.483935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:54.636 [2024-07-11 21:42:15.488197] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.636 [2024-07-11 21:42:15.488239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.636 [2024-07-11 21:42:15.488254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.636 [2024-07-11 21:42:15.492503] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.636 [2024-07-11 21:42:15.492543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.636 [2024-07-11 21:42:15.492557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:54.636 [2024-07-11 21:42:15.496765] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.636 [2024-07-11 21:42:15.496807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.636 [2024-07-11 21:42:15.496821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:54.636 [2024-07-11 21:42:15.501091] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 
00:28:54.636 [2024-07-11 21:42:15.501133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.636 [2024-07-11 21:42:15.501148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:54.636 [2024-07-11 21:42:15.505375] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.636 [2024-07-11 21:42:15.505418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.636 [2024-07-11 21:42:15.505432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.636 [2024-07-11 21:42:15.509701] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.636 [2024-07-11 21:42:15.509741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.636 [2024-07-11 21:42:15.509756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:54.636 [2024-07-11 21:42:15.513953] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.636 [2024-07-11 21:42:15.513995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.636 [2024-07-11 21:42:15.514010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:54.636 [2024-07-11 21:42:15.518327] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.636 [2024-07-11 21:42:15.518369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.636 [2024-07-11 21:42:15.518383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:54.636 [2024-07-11 21:42:15.522668] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.636 [2024-07-11 21:42:15.522709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.636 [2024-07-11 21:42:15.522723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.636 [2024-07-11 21:42:15.527019] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.636 [2024-07-11 21:42:15.527065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.636 [2024-07-11 21:42:15.527079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:54.636 [2024-07-11 21:42:15.531276] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.636 [2024-07-11 21:42:15.531319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.636 [2024-07-11 21:42:15.531333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:54.636 [2024-07-11 21:42:15.535623] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.636 [2024-07-11 21:42:15.535665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.636 [2024-07-11 21:42:15.535680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:54.636 [2024-07-11 21:42:15.539830] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.636 [2024-07-11 21:42:15.539872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.636 [2024-07-11 21:42:15.539886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.636 [2024-07-11 21:42:15.544155] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.636 [2024-07-11 21:42:15.544198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.636 [2024-07-11 21:42:15.544212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:54.636 [2024-07-11 21:42:15.548437] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.636 [2024-07-11 21:42:15.548479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.636 [2024-07-11 21:42:15.548506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:54.637 [2024-07-11 21:42:15.552718] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.637 [2024-07-11 21:42:15.552759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.637 [2024-07-11 21:42:15.552773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:54.637 [2024-07-11 21:42:15.557029] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.637 [2024-07-11 21:42:15.557070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.637 [2024-07-11 21:42:15.557085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:28:54.637 [2024-07-11 21:42:15.561333] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.637 [2024-07-11 21:42:15.561375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.637 [2024-07-11 21:42:15.561390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:54.637 [2024-07-11 21:42:15.565716] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.637 [2024-07-11 21:42:15.565758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.637 [2024-07-11 21:42:15.565773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:54.637 [2024-07-11 21:42:15.570001] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.637 [2024-07-11 21:42:15.570044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.637 [2024-07-11 21:42:15.570058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:54.637 [2024-07-11 21:42:15.574339] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.637 [2024-07-11 21:42:15.574381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.637 [2024-07-11 21:42:15.574396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.637 [2024-07-11 21:42:15.578690] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.637 [2024-07-11 21:42:15.578730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.637 [2024-07-11 21:42:15.578744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:54.895 [2024-07-11 21:42:15.582993] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.895 [2024-07-11 21:42:15.583034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.895 [2024-07-11 21:42:15.583049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:54.895 [2024-07-11 21:42:15.587275] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.895 [2024-07-11 21:42:15.587317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.895 [2024-07-11 21:42:15.587331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:54.895 [2024-07-11 21:42:15.591660] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.895 [2024-07-11 21:42:15.591702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.895 [2024-07-11 21:42:15.591716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.895 [2024-07-11 21:42:15.595937] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.895 [2024-07-11 21:42:15.595979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.895 [2024-07-11 21:42:15.595994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:54.895 [2024-07-11 21:42:15.600272] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.895 [2024-07-11 21:42:15.600315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.895 [2024-07-11 21:42:15.600330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:54.895 [2024-07-11 21:42:15.604649] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.895 [2024-07-11 21:42:15.604692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.895 [2024-07-11 21:42:15.604707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:54.895 [2024-07-11 21:42:15.608868] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.895 [2024-07-11 21:42:15.608911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.895 [2024-07-11 21:42:15.608925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.895 [2024-07-11 21:42:15.613228] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.895 [2024-07-11 21:42:15.613271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.895 [2024-07-11 21:42:15.613286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:54.895 [2024-07-11 21:42:15.617556] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.895 [2024-07-11 21:42:15.617597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.895 [2024-07-11 
21:42:15.617611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:54.895 [2024-07-11 21:42:15.621828] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.895 [2024-07-11 21:42:15.621871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.895 [2024-07-11 21:42:15.621885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:54.895 [2024-07-11 21:42:15.626080] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.895 [2024-07-11 21:42:15.626123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.895 [2024-07-11 21:42:15.626137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.895 [2024-07-11 21:42:15.630367] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.895 [2024-07-11 21:42:15.630409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.895 [2024-07-11 21:42:15.630423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:54.895 [2024-07-11 21:42:15.634688] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.895 [2024-07-11 21:42:15.634729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.895 [2024-07-11 21:42:15.634743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:54.895 [2024-07-11 21:42:15.638914] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.895 [2024-07-11 21:42:15.638956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.895 [2024-07-11 21:42:15.638970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:54.895 [2024-07-11 21:42:15.643149] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.895 [2024-07-11 21:42:15.643192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.895 [2024-07-11 21:42:15.643208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.895 [2024-07-11 21:42:15.647366] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1390050) 00:28:54.895 [2024-07-11 21:42:15.647408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.895 [2024-07-11 21:42:15.647422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:54.895 00:28:54.895 Latency(us) 00:28:54.895 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:54.895 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:28:54.895 nvme0n1 : 2.00 7139.14 892.39 0.00 0.00 2237.85 1995.87 10724.07 00:28:54.895 =================================================================================================================== 00:28:54.895 Total : 7139.14 892.39 0.00 0.00 2237.85 1995.87 10724.07 00:28:54.895 0 00:28:54.895 21:42:15 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:54.895 21:42:15 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:54.895 21:42:15 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:54.895 21:42:15 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:54.895 | .driver_specific 00:28:54.895 | .nvme_error 00:28:54.895 | .status_code 00:28:54.895 | .command_transient_transport_error' 00:28:55.154 21:42:15 -- host/digest.sh@71 -- # (( 461 > 0 )) 00:28:55.154 21:42:15 -- host/digest.sh@73 -- # killprocess 84045 00:28:55.154 21:42:15 -- common/autotest_common.sh@926 -- # '[' -z 84045 ']' 00:28:55.154 21:42:15 -- common/autotest_common.sh@930 -- # kill -0 84045 00:28:55.154 21:42:15 -- common/autotest_common.sh@931 -- # uname 00:28:55.154 21:42:15 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:55.154 21:42:15 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 84045 00:28:55.154 killing process with pid 84045 00:28:55.154 Received shutdown signal, test time was about 2.000000 seconds 00:28:55.154 00:28:55.154 Latency(us) 00:28:55.154 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:55.154 =================================================================================================================== 00:28:55.154 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:55.154 21:42:15 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:28:55.154 21:42:15 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:28:55.154 21:42:15 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 84045' 00:28:55.154 21:42:15 -- common/autotest_common.sh@945 -- # kill 84045 00:28:55.154 21:42:15 -- common/autotest_common.sh@950 -- # wait 84045 00:28:55.412 21:42:16 -- host/digest.sh@113 -- # run_bperf_err randwrite 4096 128 00:28:55.412 21:42:16 -- host/digest.sh@54 -- # local rw bs qd 00:28:55.412 21:42:16 -- host/digest.sh@56 -- # rw=randwrite 00:28:55.412 21:42:16 -- host/digest.sh@56 -- # bs=4096 00:28:55.412 21:42:16 -- host/digest.sh@56 -- # qd=128 00:28:55.412 21:42:16 -- host/digest.sh@58 -- # bperfpid=84105 00:28:55.412 21:42:16 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:28:55.412 21:42:16 -- host/digest.sh@60 -- # waitforlisten 84105 /var/tmp/bperf.sock 00:28:55.412 21:42:16 -- common/autotest_common.sh@819 -- # '[' -z 84105 ']' 00:28:55.412 21:42:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:55.412 21:42:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:55.412 21:42:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:55.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:55.412 21:42:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:55.412 21:42:16 -- common/autotest_common.sh@10 -- # set +x 00:28:55.412 [2024-07-11 21:42:16.222567] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:28:55.412 [2024-07-11 21:42:16.222863] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84105 ] 00:28:55.412 [2024-07-11 21:42:16.356867] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:55.670 [2024-07-11 21:42:16.444770] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:56.604 21:42:17 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:56.604 21:42:17 -- common/autotest_common.sh@852 -- # return 0 00:28:56.604 21:42:17 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:56.604 21:42:17 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:56.604 21:42:17 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:56.604 21:42:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:56.604 21:42:17 -- common/autotest_common.sh@10 -- # set +x 00:28:56.604 21:42:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:56.604 21:42:17 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:56.604 21:42:17 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:56.862 nvme0n1 00:28:56.862 21:42:17 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:28:56.862 21:42:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:56.862 21:42:17 -- common/autotest_common.sh@10 -- # set +x 00:28:56.862 21:42:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:56.862 21:42:17 -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:56.862 21:42:17 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:57.120 Running I/O for 2 seconds... 
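The traced commands above exercise the data-digest error path end to end: bdevperf is started in wait mode on its own RPC socket, the NVMe-oF TCP controller is attached with --ddgst so data digests are verified on the host, crc32c corruption is injected through accel_error_inject_error, and after the timed run the test reads the transient transport error counter back out of bdev_get_iostat (the preceding randread run counted 461 such errors). A minimal sketch of that flow is shown below, assuming the same socket path, target address, bdev name, and rpc.py location that appear in this trace; the standalone-script framing and the errcount variable are illustrative, not part of the test suite.

  # start bdevperf in wait mode (-z) on a dedicated RPC socket, as the trace does for pid 84105
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # keep per-command NVMe error statistics and retry failed I/O indefinitely
  $rpc -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # clear any previous crc32c injection, then attach the TCP controller with data digest enabled
  # (accel_error_inject_error is issued via rpc_cmd in the trace, i.e. against the target
  #  application's default RPC socket rather than bperf.sock -- an assumption based on the helper name)
  $rpc accel_error_inject_error -o crc32c -t disable
  $rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # corrupt every 256th crc32c computation so data digests fail during the run
  $rpc accel_error_inject_error -o crc32c -t corrupt -i 256

  # run the workload, then count completions that failed with a transient transport error
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
  errcount=$($rpc -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 | \
      jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  (( errcount > 0 ))   # the test only passes if digest corruption actually produced transient errors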
00:28:57.120 [2024-07-11 21:42:17.934688] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9a90) with pdu=0x2000190ddc00 00:28:57.120 [2024-07-11 21:42:17.936060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9581 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.120 [2024-07-11 21:42:17.936107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.120 [2024-07-11 21:42:17.950855] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9a90) with pdu=0x2000190fef90 00:28:57.120 [2024-07-11 21:42:17.952259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.120 [2024-07-11 21:42:17.952307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.120 [2024-07-11 21:42:17.967215] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9a90) with pdu=0x2000190ff3c8 00:28:57.120 [2024-07-11 21:42:17.968633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:4331 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.120 [2024-07-11 21:42:17.968680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:57.120 [2024-07-11 21:42:17.983131] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9a90) with pdu=0x2000190feb58 00:28:57.120 [2024-07-11 21:42:17.984431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:4482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.120 [2024-07-11 21:42:17.984473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:57.120 [2024-07-11 21:42:17.998847] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9a90) with pdu=0x2000190fe720 00:28:57.120 [2024-07-11 21:42:18.000141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:2623 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.120 [2024-07-11 21:42:18.000182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:57.120 [2024-07-11 21:42:18.014627] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9a90) with pdu=0x2000190fe2e8 00:28:57.120 [2024-07-11 21:42:18.015918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:20266 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.120 [2024-07-11 21:42:18.015960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:57.120 [2024-07-11 21:42:18.030373] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9a90) with pdu=0x2000190fdeb0 00:28:57.120 [2024-07-11 21:42:18.031686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:4292 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.120 [2024-07-11 21:42:18.031729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007b p:0 
m:0 dnr:0 00:28:57.120 [2024-07-11 21:42:18.046178] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9a90) with pdu=0x2000190fda78 00:28:57.120 [2024-07-11 21:42:18.047465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1474 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.120 [2024-07-11 21:42:18.047518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:57.120 [2024-07-11 21:42:18.061942] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9a90) with pdu=0x2000190fd640 00:28:57.120 [2024-07-11 21:42:18.063223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:4966 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.120 [2024-07-11 21:42:18.063268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:57.378 [2024-07-11 21:42:18.078151] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9a90) with pdu=0x2000190fd208 00:28:57.378 [2024-07-11 21:42:18.079495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:9146 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.378 [2024-07-11 21:42:18.079543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:28:57.378 [2024-07-11 21:42:18.094410] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9a90) with pdu=0x2000190fcdd0 00:28:57.378 [2024-07-11 21:42:18.095698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:22551 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.378 [2024-07-11 21:42:18.095744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:57.378 [2024-07-11 21:42:18.110178] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9a90) with pdu=0x2000190fc998 00:28:57.378 [2024-07-11 21:42:18.111414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:7518 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.378 [2024-07-11 21:42:18.111457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:57.378 [2024-07-11 21:42:18.125977] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9a90) with pdu=0x2000190fc560 00:28:57.378 [2024-07-11 21:42:18.127220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:1593 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.378 [2024-07-11 21:42:18.127264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:28:57.378 [2024-07-11 21:42:18.141752] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9a90) with pdu=0x2000190fc128 00:28:57.378 [2024-07-11 21:42:18.142972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:20164 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.378 [2024-07-11 21:42:18.143015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 
cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:57.378 [2024-07-11 21:42:18.157607] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9a90) with pdu=0x2000190fbcf0 00:28:57.378 [2024-07-11 21:42:18.158816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:2064 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.378 [2024-07-11 21:42:18.158858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:57.379 [2024-07-11 21:42:18.173447] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9a90) with pdu=0x2000190fb8b8 00:28:57.379 [2024-07-11 21:42:18.174672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:2238 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.379 [2024-07-11 21:42:18.174717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:28:57.379 [2024-07-11 21:42:18.189323] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9a90) with pdu=0x2000190fb480 00:28:57.379 [2024-07-11 21:42:18.190568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:7991 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.379 [2024-07-11 21:42:18.190611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:28:57.379 [2024-07-11 21:42:18.205260] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9a90) with pdu=0x2000190fb048 00:28:57.379 [2024-07-11 21:42:18.206454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:16583 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.379 [2024-07-11 21:42:18.206520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:57.379 [2024-07-11 21:42:18.221190] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9a90) with pdu=0x2000190fac10 00:28:57.379 [2024-07-11 21:42:18.222370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:20490 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.379 [2024-07-11 21:42:18.222417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:57.379 [2024-07-11 21:42:18.237015] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9a90) with pdu=0x2000190fa7d8 00:28:57.379 [2024-07-11 21:42:18.238176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:15353 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.379 [2024-07-11 21:42:18.238219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:57.379 [2024-07-11 21:42:18.252841] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9a90) with pdu=0x2000190fa3a0 00:28:57.379 [2024-07-11 21:42:18.253987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:3094 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.379 [2024-07-11 21:42:18.254030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:41 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:57.379 [2024-07-11 21:42:18.268645] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9a90) with pdu=0x2000190f9f68 00:28:57.379 [2024-07-11 21:42:18.269791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:7413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.379 [2024-07-11 21:42:18.269835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:28:57.379 [2024-07-11 21:42:18.284595] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9a90) with pdu=0x2000190f9b30 00:28:57.379 [2024-07-11 21:42:18.285753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22416 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.379 [2024-07-11 21:42:18.285798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:57.379 [2024-07-11 21:42:18.300582] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9a90) with pdu=0x2000190f96f8 00:28:57.379 [2024-07-11 21:42:18.301740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:7383 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.379 [2024-07-11 21:42:18.301786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:57.379 [2024-07-11 21:42:18.316878] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9a90) with pdu=0x2000190f92c0 00:28:57.379 [2024-07-11 21:42:18.318056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:8635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.379 [2024-07-11 21:42:18.318103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:28:57.638 [2024-07-11 21:42:18.334189] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9a90) with pdu=0x2000190f8e88 00:28:57.638 [2024-07-11 21:42:18.335361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:9002 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.638 [2024-07-11 21:42:18.335408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:28:57.638 [2024-07-11 21:42:18.350280] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9a90) with pdu=0x2000190f8a50 00:28:57.638 [2024-07-11 21:42:18.351397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:12382 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.638 [2024-07-11 21:42:18.351443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:28:57.638 [2024-07-11 21:42:18.366235] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9a90) with pdu=0x2000190f8618 00:28:57.638 [2024-07-11 21:42:18.367370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:1231 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.638 [2024-07-11 21:42:18.367415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:57.638 [2024-07-11 21:42:18.382638] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9a90) with pdu=0x2000190f81e0 00:28:57.638 [2024-07-11 21:42:18.383770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:4662 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.638 [2024-07-11 21:42:18.383818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:57.638 [2024-07-11 21:42:18.398724] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9a90) with pdu=0x2000190f7da8 00:28:57.638 [2024-07-11 21:42:18.399789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:9719 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.638 [2024-07-11 21:42:18.399833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:57.638 [2024-07-11 21:42:18.414764] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9a90) with pdu=0x2000190f7970 00:28:57.638 [2024-07-11 21:42:18.415835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:8288 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.638 [2024-07-11 21:42:18.415880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:57.638 [2024-07-11 21:42:18.431089] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9a90) with pdu=0x2000190f7538 00:28:57.638 [2024-07-11 21:42:18.432163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:18185 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.638 [2024-07-11 21:42:18.432210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:57.638 [2024-07-11 21:42:18.447058] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9a90) with pdu=0x2000190f7100 00:28:57.638 [2024-07-11 21:42:18.448111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:24269 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.638 [2024-07-11 21:42:18.448157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:57.638 [2024-07-11 21:42:18.463063] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9a90) with pdu=0x2000190f6cc8 00:28:57.638 [2024-07-11 21:42:18.464106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:12444 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.638 [2024-07-11 21:42:18.464151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:28:57.638 [2024-07-11 21:42:18.479024] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9a90) with pdu=0x2000190f6890 00:28:57.638 [2024-07-11 21:42:18.480041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:17328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.638 [2024-07-11 21:42:18.480084] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:57.638 [2024-07-11 21:42:18.494950] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9a90) with pdu=0x2000190f6458 00:28:57.638 [2024-07-11 21:42:18.495969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:1628 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.638 [2024-07-11 21:42:18.496013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:57.638 [2024-07-11 21:42:18.511201] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9a90) with pdu=0x2000190f6020 00:28:57.638 [2024-07-11 21:42:18.512255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:1522 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.638 [2024-07-11 21:42:18.512303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:57.638 [2024-07-11 21:42:18.527245] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9a90) with pdu=0x2000190f5be8 00:28:57.638 [2024-07-11 21:42:18.528255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:5384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.638 [2024-07-11 21:42:18.528300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:28:57.638 [2024-07-11 21:42:18.543268] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9a90) with pdu=0x2000190f57b0 00:28:57.638 [2024-07-11 21:42:18.544253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:1691 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.638 [2024-07-11 21:42:18.544298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:57.638 [2024-07-11 21:42:18.559182] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9a90) with pdu=0x2000190f5378 00:28:57.638 [2024-07-11 21:42:18.560152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:849 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.638 [2024-07-11 21:42:18.560196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:57.638 [2024-07-11 21:42:18.575107] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9a90) with pdu=0x2000190f4f40 00:28:57.638 [2024-07-11 21:42:18.576070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:329 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.638 [2024-07-11 21:42:18.576113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:57.896 [2024-07-11 21:42:18.590984] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9a90) with pdu=0x2000190f4b08 00:28:57.896 [2024-07-11 21:42:18.591960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:8666 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.896 [2024-07-11 21:42:18.592006] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:57.896 [2024-07-11 21:42:18.607172] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9a90) with pdu=0x2000190f46d0 00:28:57.896 [2024-07-11 21:42:18.608117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:924 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.896 [2024-07-11 21:42:18.608161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:28:57.896 [2024-07-11 21:42:18.622957] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9a90) with pdu=0x2000190f4298 00:28:57.896 [2024-07-11 21:42:18.623879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:2590 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.896 [2024-07-11 21:42:18.623920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:57.896 [2024-07-11 21:42:18.638964] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9a90) with pdu=0x2000190f3e60 00:28:57.896 [2024-07-11 21:42:18.639942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:12198 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.896 [2024-07-11 21:42:18.639989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:28:57.897 [2024-07-11 21:42:18.654957] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9a90) with pdu=0x2000190f3a28 00:28:57.897 [2024-07-11 21:42:18.655864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:24674 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.897 [2024-07-11 21:42:18.655908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:28:57.897 [2024-07-11 21:42:18.670763] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9a90) with pdu=0x2000190f35f0 00:28:57.897 [2024-07-11 21:42:18.671647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:4446 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.897 [2024-07-11 21:42:18.671688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:28:57.897 [2024-07-11 21:42:18.686538] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9a90) with pdu=0x2000190f31b8 00:28:57.897 [2024-07-11 21:42:18.687403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:17578 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.897 [2024-07-11 21:42:18.687444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:28:57.897 [2024-07-11 21:42:18.702236] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9a90) with pdu=0x2000190f2d80 00:28:57.897 [2024-07-11 21:42:18.703127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:5900 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.897 [2024-07-11 
21:42:18.703169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:57.897 [2024-07-11 21:42:18.718033] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9a90) with pdu=0x2000190f2948 00:28:57.897 [2024-07-11 21:42:18.718919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:6816 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.897 [2024-07-11 21:42:18.718962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:57.897 [2024-07-11 21:42:18.733982] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9a90) with pdu=0x2000190f2510 00:28:57.897 [2024-07-11 21:42:18.734862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:2237 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.897 [2024-07-11 21:42:18.734909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:57.897 [2024-07-11 21:42:18.749968] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9a90) with pdu=0x2000190f20d8 00:28:57.897 [2024-07-11 21:42:18.750840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:7309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.897 [2024-07-11 21:42:18.750885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:57.897 [2024-07-11 21:42:18.766013] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9a90) with pdu=0x2000190f1ca0 00:28:57.897 [2024-07-11 21:42:18.766903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:11951 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.897 [2024-07-11 21:42:18.766951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:57.897 [2024-07-11 21:42:18.782071] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9a90) with pdu=0x2000190f1868 00:28:57.897 [2024-07-11 21:42:18.782922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:10563 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.897 [2024-07-11 21:42:18.782967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:57.897 [2024-07-11 21:42:18.798007] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9a90) with pdu=0x2000190f1430 00:28:57.897 [2024-07-11 21:42:18.798845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:18117 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.897 [2024-07-11 21:42:18.798891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:57.897 [2024-07-11 21:42:18.813946] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9a90) with pdu=0x2000190f0ff8 00:28:57.897 [2024-07-11 21:42:18.814775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:13583 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:28:57.897 [2024-07-11 21:42:18.814819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:57.897 [2024-07-11 21:42:18.829878] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9a90) with pdu=0x2000190f0bc0 00:28:57.897 [2024-07-11 21:42:18.830694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:23794 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.897 [2024-07-11 21:42:18.830741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:57.897 [2024-07-11 21:42:18.845707] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9a90) with pdu=0x2000190f0788 00:28:58.155 [2024-07-11 21:42:18.846477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:8618 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.155 [2024-07-11 21:42:18.846542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:58.155 [2024-07-11 21:42:18.861610] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9a90) with pdu=0x2000190f0350 00:28:58.155 [2024-07-11 21:42:18.862387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:7856 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.155 [2024-07-11 21:42:18.862430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:58.155 [2024-07-11 21:42:18.877476] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9a90) with pdu=0x2000190eff18 00:28:58.155 [2024-07-11 21:42:18.878244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:22824 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.155 [2024-07-11 21:42:18.878288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:58.155 [2024-07-11 21:42:18.893725] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9a90) with pdu=0x2000190efae0 00:28:58.155 [2024-07-11 21:42:18.894539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:17769 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.155 [2024-07-11 21:42:18.894592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:28:58.155 [2024-07-11 21:42:18.909945] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9a90) with pdu=0x2000190ef6a8 00:28:58.155 [2024-07-11 21:42:18.910727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:18406 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.155 [2024-07-11 21:42:18.910774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:58.155 [2024-07-11 21:42:18.926081] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9a90) with pdu=0x2000190ef270 00:28:58.155 [2024-07-11 21:42:18.926859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 
lba:2027 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.155 [2024-07-11 21:42:18.926898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:28:58.155 [2024-07-11 21:42:18.942199] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9a90) with pdu=0x2000190eee38 00:28:58.155 [2024-07-11 21:42:18.942998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7550 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.155 [2024-07-11 21:42:18.943036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:58.155 [2024-07-11 21:42:18.958244] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9a90) with pdu=0x2000190eea00 00:28:58.155 [2024-07-11 21:42:18.958981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1751 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.155 [2024-07-11 21:42:18.959020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:58.155 [2024-07-11 21:42:18.974223] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9a90) with pdu=0x2000190ee5c8 00:28:58.155 [2024-07-11 21:42:18.974957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:24757 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.155 [2024-07-11 21:42:18.975002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:28:58.155 [2024-07-11 21:42:18.990373] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9a90) with pdu=0x2000190ee190 00:28:58.155 [2024-07-11 21:42:18.991111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:19472 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.155 [2024-07-11 21:42:18.991149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:28:58.155 [2024-07-11 21:42:19.006408] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9a90) with pdu=0x2000190edd58 00:28:58.155 [2024-07-11 21:42:19.007145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5739 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.155 [2024-07-11 21:42:19.007186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:58.155 [2024-07-11 21:42:19.022441] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9a90) with pdu=0x2000190ed920 00:28:58.155 [2024-07-11 21:42:19.023155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:12592 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.155 [2024-07-11 21:42:19.023194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:28:58.155 [2024-07-11 21:42:19.038275] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9a90) with pdu=0x2000190ed4e8 00:28:58.155 [2024-07-11 21:42:19.038946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:23 nsid:1 lba:12168 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.155 [2024-07-11 21:42:19.038984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:28:58.155 [2024-07-11 21:42:19.054095] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9a90) with pdu=0x2000190ed0b0 00:28:58.155 [2024-07-11 21:42:19.054781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:9454 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.155 [2024-07-11 21:42:19.054824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:28:58.155 [2024-07-11 21:42:19.070110] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9a90) with pdu=0x2000190ecc78 00:28:58.155 [2024-07-11 21:42:19.070809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:2547 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.155 [2024-07-11 21:42:19.070874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:28:58.155 [2024-07-11 21:42:19.086078] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9a90) with pdu=0x2000190ec840 00:28:58.155 [2024-07-11 21:42:19.086732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:15255 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.155 [2024-07-11 21:42:19.086772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:28:58.155 [2024-07-11 21:42:19.102004] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9a90) with pdu=0x2000190ec408 00:28:58.155 [2024-07-11 21:42:19.102659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:6268 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.155 [2024-07-11 21:42:19.102692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:28:58.413 [2024-07-11 21:42:19.117973] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9a90) with pdu=0x2000190ebfd0 00:28:58.413 [2024-07-11 21:42:19.118606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:8827 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.413 [2024-07-11 21:42:19.118647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:28:58.413 [2024-07-11 21:42:19.133864] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9a90) with pdu=0x2000190ebb98 00:28:58.413 [2024-07-11 21:42:19.134520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:15824 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.413 [2024-07-11 21:42:19.134562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:58.413 [2024-07-11 21:42:19.150289] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9a90) with pdu=0x2000190eb760 00:28:58.413 [2024-07-11 21:42:19.150947] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:18015 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.413 [2024-07-11 21:42:19.151002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:28:58.413 [2024-07-11 21:42:19.166294] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9a90) with pdu=0x2000190eb328 00:28:58.413 [2024-07-11 21:42:19.166940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:14948 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.413 [2024-07-11 21:42:19.166986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:28:58.413 [2024-07-11 21:42:19.182383] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9a90) with pdu=0x2000190eaef0 00:28:58.413 [2024-07-11 21:42:19.183003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:3303 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.413 [2024-07-11 21:42:19.183043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:28:58.413 [2024-07-11 21:42:19.198303] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9a90) with pdu=0x2000190eaab8 00:28:58.413 [2024-07-11 21:42:19.198905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:13673 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.413 [2024-07-11 21:42:19.198955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:58.413 [2024-07-11 21:42:19.214162] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9a90) with pdu=0x2000190ea680 00:28:58.413 [2024-07-11 21:42:19.214742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:15628 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.413 [2024-07-11 21:42:19.214778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:28:58.413 [2024-07-11 21:42:19.230113] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9a90) with pdu=0x2000190ea248 00:28:58.413 [2024-07-11 21:42:19.230700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:23739 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.413 [2024-07-11 21:42:19.230746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:28:58.413 [2024-07-11 21:42:19.246159] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9a90) with pdu=0x2000190e9e10 00:28:58.414 [2024-07-11 21:42:19.246766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:18060 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.414 [2024-07-11 21:42:19.246808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:28:58.414 [2024-07-11 21:42:19.262378] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9a90) with pdu=0x2000190e99d8 00:28:58.414 [2024-07-11 21:42:19.262949] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:24533 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.414 [2024-07-11 21:42:19.263000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:58.414 [2024-07-11 21:42:19.278257] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9a90) with pdu=0x2000190e95a0 00:28:58.414 [2024-07-11 21:42:19.278807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:24659 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.414 [2024-07-11 21:42:19.278848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:28:58.414 [2024-07-11 21:42:19.294244] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9a90) with pdu=0x2000190e9168 00:28:58.414 [2024-07-11 21:42:19.294778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:18869 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.414 [2024-07-11 21:42:19.294816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:28:58.414 [2024-07-11 21:42:19.310268] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9a90) with pdu=0x2000190e8d30 00:28:58.414 [2024-07-11 21:42:19.310815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:20981 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.414 [2024-07-11 21:42:19.310853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:28:58.414 [2024-07-11 21:42:19.326366] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9a90) with pdu=0x2000190e88f8 00:28:58.414 [2024-07-11 21:42:19.326900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:21979 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.414 [2024-07-11 21:42:19.326941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:58.414 [2024-07-11 21:42:19.342616] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9a90) with pdu=0x2000190e84c0 00:28:58.414 [2024-07-11 21:42:19.343115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:20501 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.414 [2024-07-11 21:42:19.343155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:28:58.414 [2024-07-11 21:42:19.359068] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9a90) with pdu=0x2000190e8088 00:28:58.414 [2024-07-11 21:42:19.359587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:11311 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.414 [2024-07-11 21:42:19.359627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:28:58.671 [2024-07-11 21:42:19.375393] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9a90) with pdu=0x2000190e7c50 00:28:58.671 [2024-07-11 
21:42:19.375876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:19422 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.671 [2024-07-11 21:42:19.375916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:28:58.671 [2024-07-11 21:42:19.391313] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9a90) with pdu=0x2000190e7818 00:28:58.671 [2024-07-11 21:42:19.391771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:1012 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.671 [2024-07-11 21:42:19.391808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:58.671 [2024-07-11 21:42:19.407236] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9a90) with pdu=0x2000190e73e0 00:28:58.671 [2024-07-11 21:42:19.407678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:24935 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.671 [2024-07-11 21:42:19.407715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:28:58.671 [2024-07-11 21:42:19.423428] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9a90) with pdu=0x2000190e6fa8 00:28:58.671 [2024-07-11 21:42:19.423884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:12729 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.671 [2024-07-11 21:42:19.423924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:28:58.671 [2024-07-11 21:42:19.439972] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9a90) with pdu=0x2000190e6b70 00:28:58.671 [2024-07-11 21:42:19.440421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:16689 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.671 [2024-07-11 21:42:19.440461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:28:58.671 [2024-07-11 21:42:19.456266] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9a90) with pdu=0x2000190e6738 00:28:58.671 [2024-07-11 21:42:19.456696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22598 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.671 [2024-07-11 21:42:19.456736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:58.671 [2024-07-11 21:42:19.472341] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9a90) with pdu=0x2000190e6300 00:28:58.671 [2024-07-11 21:42:19.472763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:3213 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.671 [2024-07-11 21:42:19.472794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:58.671 [2024-07-11 21:42:19.488297] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9a90) with 
pdu=0x2000190e5ec8 00:28:58.671 [2024-07-11 21:42:19.488705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16531 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.671 [2024-07-11 21:42:19.488746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:28:58.671 [2024-07-11 21:42:19.504212] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9a90) with pdu=0x2000190e5a90 00:28:58.671 [2024-07-11 21:42:19.504608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:10509 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.671 [2024-07-11 21:42:19.504647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:28:58.671 [2024-07-11 21:42:19.520180] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9a90) with pdu=0x2000190e5658 00:28:58.671 [2024-07-11 21:42:19.520565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:171 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.671 [2024-07-11 21:42:19.520603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:28:58.671 [2024-07-11 21:42:19.536075] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9a90) with pdu=0x2000190e5220 00:28:58.671 [2024-07-11 21:42:19.536447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:4857 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.671 [2024-07-11 21:42:19.536497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:28:58.671 [2024-07-11 21:42:19.552199] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9a90) with pdu=0x2000190e4de8 00:28:58.671 [2024-07-11 21:42:19.552582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:25003 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.671 [2024-07-11 21:42:19.552627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:28:58.671 [2024-07-11 21:42:19.568626] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9a90) with pdu=0x2000190e49b0 00:28:58.671 [2024-07-11 21:42:19.569006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:4794 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.671 [2024-07-11 21:42:19.569047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:28:58.671 [2024-07-11 21:42:19.584728] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9a90) with pdu=0x2000190e4578 00:28:58.671 [2024-07-11 21:42:19.585090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:14504 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.671 [2024-07-11 21:42:19.585129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:28:58.671 [2024-07-11 21:42:19.600843] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x12d9a90) with pdu=0x2000190e4140 00:28:58.671 [2024-07-11 21:42:19.601188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:6479 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.671 [2024-07-11 21:42:19.601230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:28:58.671 [2024-07-11 21:42:19.616687] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9a90) with pdu=0x2000190e3d08 00:28:58.671 [2024-07-11 21:42:19.616997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:24226 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.671 [2024-07-11 21:42:19.617026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:28:58.929 [2024-07-11 21:42:19.632533] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9a90) with pdu=0x2000190e38d0 00:28:58.929 [2024-07-11 21:42:19.632827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:10166 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.929 [2024-07-11 21:42:19.632864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:28:58.929 [2024-07-11 21:42:19.648451] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9a90) with pdu=0x2000190e3498 00:28:58.929 [2024-07-11 21:42:19.648768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:24437 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.929 [2024-07-11 21:42:19.648823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:28:58.929 [2024-07-11 21:42:19.664504] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9a90) with pdu=0x2000190e3060 00:28:58.929 [2024-07-11 21:42:19.664800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:14691 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.929 [2024-07-11 21:42:19.664843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:28:58.929 [2024-07-11 21:42:19.680465] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9a90) with pdu=0x2000190e2c28 00:28:58.929 [2024-07-11 21:42:19.680762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:3016 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.929 [2024-07-11 21:42:19.680805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:28:58.929 [2024-07-11 21:42:19.696529] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9a90) with pdu=0x2000190e27f0 00:28:58.929 [2024-07-11 21:42:19.696808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:7595 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.929 [2024-07-11 21:42:19.696852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:28:58.929 [2024-07-11 21:42:19.712395] tcp.c:2034:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x12d9a90) with pdu=0x2000190e23b8 00:28:58.929 [2024-07-11 21:42:19.712666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7674 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.929 [2024-07-11 21:42:19.712703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:28:58.929 [2024-07-11 21:42:19.728327] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9a90) with pdu=0x2000190e1f80 00:28:58.929 [2024-07-11 21:42:19.728599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15415 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.929 [2024-07-11 21:42:19.728637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:28:58.929 [2024-07-11 21:42:19.744410] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9a90) with pdu=0x2000190e1b48 00:28:58.929 [2024-07-11 21:42:19.744663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:7706 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.929 [2024-07-11 21:42:19.744695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:28:58.929 [2024-07-11 21:42:19.760329] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9a90) with pdu=0x2000190e1710 00:28:58.929 [2024-07-11 21:42:19.760565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:4213 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.929 [2024-07-11 21:42:19.760603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:28:58.929 [2024-07-11 21:42:19.776271] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9a90) with pdu=0x2000190e12d8 00:28:58.929 [2024-07-11 21:42:19.776500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:22735 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.929 [2024-07-11 21:42:19.776538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:58.929 [2024-07-11 21:42:19.792190] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9a90) with pdu=0x2000190e0ea0 00:28:58.929 [2024-07-11 21:42:19.792390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:1 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.929 [2024-07-11 21:42:19.792432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:28:58.929 [2024-07-11 21:42:19.808263] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9a90) with pdu=0x2000190e0a68 00:28:58.929 [2024-07-11 21:42:19.808479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:14666 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.929 [2024-07-11 21:42:19.808524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:28:58.929 [2024-07-11 21:42:19.824273] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9a90) with pdu=0x2000190e0630 00:28:58.929 [2024-07-11 21:42:19.824461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:1055 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.929 [2024-07-11 21:42:19.824506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:28:58.929 [2024-07-11 21:42:19.840172] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9a90) with pdu=0x2000190e01f8 00:28:58.929 [2024-07-11 21:42:19.840358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4665 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.929 [2024-07-11 21:42:19.840388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:28:58.929 [2024-07-11 21:42:19.856197] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9a90) with pdu=0x2000190dfdc0 00:28:58.929 [2024-07-11 21:42:19.856366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:15907 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.929 [2024-07-11 21:42:19.856396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:28:58.929 [2024-07-11 21:42:19.872099] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9a90) with pdu=0x2000190df988 00:28:58.929 [2024-07-11 21:42:19.872250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:14774 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.929 [2024-07-11 21:42:19.872276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:28:59.187 [2024-07-11 21:42:19.887926] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9a90) with pdu=0x2000190df550 00:28:59.187 [2024-07-11 21:42:19.888063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:23270 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:59.187 [2024-07-11 21:42:19.888090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:28:59.187 [2024-07-11 21:42:19.903722] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9a90) with pdu=0x2000190df118 00:28:59.187 [2024-07-11 21:42:19.903840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:59.187 [2024-07-11 21:42:19.903867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:59.187 00:28:59.187 Latency(us) 00:28:59.187 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:59.187 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:59.187 nvme0n1 : 2.01 15818.20 61.79 0.00 0.00 8083.85 7387.69 22401.40 00:28:59.187 =================================================================================================================== 00:28:59.187 Total : 15818.20 61.79 0.00 0.00 8083.85 7387.69 22401.40 00:28:59.187 0 00:28:59.187 21:42:19 -- 
host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:59.187 21:42:19 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:59.187 | .driver_specific 00:28:59.187 | .nvme_error 00:28:59.187 | .status_code 00:28:59.187 | .command_transient_transport_error' 00:28:59.187 21:42:19 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:59.187 21:42:19 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:59.445 21:42:20 -- host/digest.sh@71 -- # (( 124 > 0 )) 00:28:59.445 21:42:20 -- host/digest.sh@73 -- # killprocess 84105 00:28:59.445 21:42:20 -- common/autotest_common.sh@926 -- # '[' -z 84105 ']' 00:28:59.445 21:42:20 -- common/autotest_common.sh@930 -- # kill -0 84105 00:28:59.445 21:42:20 -- common/autotest_common.sh@931 -- # uname 00:28:59.445 21:42:20 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:59.445 21:42:20 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 84105 00:28:59.445 21:42:20 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:28:59.445 killing process with pid 84105 00:28:59.445 21:42:20 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:28:59.445 21:42:20 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 84105' 00:28:59.445 Received shutdown signal, test time was about 2.000000 seconds 00:28:59.445 00:28:59.445 Latency(us) 00:28:59.445 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:59.445 =================================================================================================================== 00:28:59.445 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:59.445 21:42:20 -- common/autotest_common.sh@945 -- # kill 84105 00:28:59.445 21:42:20 -- common/autotest_common.sh@950 -- # wait 84105 00:28:59.703 21:42:20 -- host/digest.sh@114 -- # run_bperf_err randwrite 131072 16 00:28:59.703 21:42:20 -- host/digest.sh@54 -- # local rw bs qd 00:28:59.703 21:42:20 -- host/digest.sh@56 -- # rw=randwrite 00:28:59.703 21:42:20 -- host/digest.sh@56 -- # bs=131072 00:28:59.703 21:42:20 -- host/digest.sh@56 -- # qd=16 00:28:59.703 21:42:20 -- host/digest.sh@58 -- # bperfpid=84160 00:28:59.703 21:42:20 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:28:59.703 21:42:20 -- host/digest.sh@60 -- # waitforlisten 84160 /var/tmp/bperf.sock 00:28:59.703 21:42:20 -- common/autotest_common.sh@819 -- # '[' -z 84160 ']' 00:28:59.703 21:42:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:59.703 21:42:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:59.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:59.703 21:42:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:59.703 21:42:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:59.703 21:42:20 -- common/autotest_common.sh@10 -- # set +x 00:28:59.703 [2024-07-11 21:42:20.458446] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
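At this point the first randwrite pass (4096-byte I/O at queue depth 128) is finished: host/digest.sh reads the bdev's NVMe error statistics over the bperf RPC socket, checks that the command_transient_transport_error counter is non-zero (124 in this run), kills bdevperf pid 84105, and relaunches it for a 131072-byte, queue-depth-16 pass (pid 84160). A minimal sketch of that counter query, reusing the socket path, bdev name, and jq filter shown in the trace above; the collapsed one-line jq path and the errs/echo wrapper are illustrative additions, not part of the captured log, and the counter is only populated because bdev_nvme_set_options is called with --nvme-error-stat (visible in the second-run setup below):

    # Query the transient transport error counter the same way host/digest.sh@28 does,
    # against the bdevperf RPC socket used throughout this trace.
    errs=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
           jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
    # The test passes this stage only if at least one such error was recorded.
    (( errs > 0 )) && echo "data digest errors were reported as transient transport errors: $errs"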
00:28:59.703 [2024-07-11 21:42:20.458627] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84160 ] 00:28:59.703 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:59.703 Zero copy mechanism will not be used. 00:28:59.703 [2024-07-11 21:42:20.600935] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:59.962 [2024-07-11 21:42:20.696263] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:00.527 21:42:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:00.527 21:42:21 -- common/autotest_common.sh@852 -- # return 0 00:29:00.527 21:42:21 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:00.527 21:42:21 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:00.785 21:42:21 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:00.785 21:42:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:00.785 21:42:21 -- common/autotest_common.sh@10 -- # set +x 00:29:00.785 21:42:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:00.785 21:42:21 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:00.785 21:42:21 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:01.042 nvme0n1 00:29:01.301 21:42:21 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:29:01.301 21:42:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:01.301 21:42:21 -- common/autotest_common.sh@10 -- # set +x 00:29:01.301 21:42:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:01.301 21:42:21 -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:01.301 21:42:21 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:01.301 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:01.301 Zero copy mechanism will not be used. 00:29:01.301 Running I/O for 2 seconds... 
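(For context: the setup traced at host/digest.sh@57-69 above is driven entirely over JSON-RPC. bdevperf is started with -z so it idles on /var/tmp/bperf.sock, NVMe error accounting is enabled, the target's crc32c accel operation is set to corrupt every 32nd calculation, and a data-digest (--ddgst) TCP controller is attached before perform_tests starts the queued randwrite job. The sketch below is a condensed restatement of those traced commands, not the full digest.sh flow; paths, addresses, and parameters are taken verbatim from the trace, and the un-socketed accel_error_inject_error call goes to the target application's default RPC socket, as rpc_cmd does above.)

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # rpc.py path as used in the trace
  # bdevperf side: record NVMe error completions and never retry failed I/O
  $RPC -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # target side: corrupt every 32nd crc32c calculation so TCP data digests fail
  $RPC accel_error_inject_error -o crc32c -t corrupt -i 32
  # attach the subsystem over TCP with data digest enabled
  $RPC -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # run the configured workload (randwrite, 128 KiB I/O, qd 16, 2 s per the bdevperf arguments)
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

(Each corrupted write then completes with the COMMAND TRANSIENT TRANSPORT ERROR (00/22) status seen repeatedly in the output that follows, which is what the error counter check above tallies.)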
00:29:01.301 [2024-07-11 21:42:22.098269] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:01.301 [2024-07-11 21:42:22.098648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.301 [2024-07-11 21:42:22.098690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:01.301 [2024-07-11 21:42:22.103395] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:01.301 [2024-07-11 21:42:22.103738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.301 [2024-07-11 21:42:22.103790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:01.301 [2024-07-11 21:42:22.108443] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:01.301 [2024-07-11 21:42:22.108784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.301 [2024-07-11 21:42:22.108831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:01.301 [2024-07-11 21:42:22.113535] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:01.301 [2024-07-11 21:42:22.113854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.301 [2024-07-11 21:42:22.113901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.301 [2024-07-11 21:42:22.118511] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:01.301 [2024-07-11 21:42:22.118830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.301 [2024-07-11 21:42:22.118866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:01.301 [2024-07-11 21:42:22.123560] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:01.301 [2024-07-11 21:42:22.123880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.301 [2024-07-11 21:42:22.123916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:01.301 [2024-07-11 21:42:22.128568] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:01.301 [2024-07-11 21:42:22.128882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.301 [2024-07-11 21:42:22.128923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:01.301 [2024-07-11 21:42:22.133578] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:01.301 [2024-07-11 21:42:22.133897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.301 [2024-07-11 21:42:22.133933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.301 [2024-07-11 21:42:22.138596] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:01.301 [2024-07-11 21:42:22.138931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.301 [2024-07-11 21:42:22.138969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:01.301 [2024-07-11 21:42:22.143661] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:01.301 [2024-07-11 21:42:22.143987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.301 [2024-07-11 21:42:22.144024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:01.301 [2024-07-11 21:42:22.148664] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:01.301 [2024-07-11 21:42:22.148984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.301 [2024-07-11 21:42:22.149021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:01.302 [2024-07-11 21:42:22.153681] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:01.302 [2024-07-11 21:42:22.154000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.302 [2024-07-11 21:42:22.154048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.302 [2024-07-11 21:42:22.158704] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:01.302 [2024-07-11 21:42:22.159028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.302 [2024-07-11 21:42:22.159072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:01.302 [2024-07-11 21:42:22.163765] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:01.302 [2024-07-11 21:42:22.164084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.302 [2024-07-11 21:42:22.164121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:01.302 [2024-07-11 21:42:22.168784] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:01.302 [2024-07-11 21:42:22.169102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.302 [2024-07-11 21:42:22.169138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:01.302 [2024-07-11 21:42:22.173806] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:01.302 [2024-07-11 21:42:22.174129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.302 [2024-07-11 21:42:22.174167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.302 [2024-07-11 21:42:22.178823] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:01.302 [2024-07-11 21:42:22.179144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.302 [2024-07-11 21:42:22.179181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:01.302 [2024-07-11 21:42:22.183843] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:01.302 [2024-07-11 21:42:22.184169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.302 [2024-07-11 21:42:22.184206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:01.302 [2024-07-11 21:42:22.188861] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:01.302 [2024-07-11 21:42:22.189186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.302 [2024-07-11 21:42:22.189222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:01.302 [2024-07-11 21:42:22.193883] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:01.302 [2024-07-11 21:42:22.194208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.302 [2024-07-11 21:42:22.194245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.302 [2024-07-11 21:42:22.198916] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:01.302 [2024-07-11 21:42:22.199241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.302 [2024-07-11 21:42:22.199278] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:01.302 [2024-07-11 21:42:22.203938] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:01.302 [2024-07-11 21:42:22.204259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.302 [2024-07-11 21:42:22.204296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:01.302 [2024-07-11 21:42:22.208896] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:01.302 [2024-07-11 21:42:22.209210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.302 [2024-07-11 21:42:22.209247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:01.302 [2024-07-11 21:42:22.213854] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:01.302 [2024-07-11 21:42:22.214171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.302 [2024-07-11 21:42:22.214209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.302 [2024-07-11 21:42:22.218809] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:01.302 [2024-07-11 21:42:22.219124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.302 [2024-07-11 21:42:22.219163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:01.302 [2024-07-11 21:42:22.223791] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:01.302 [2024-07-11 21:42:22.224108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.302 [2024-07-11 21:42:22.224144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:01.302 [2024-07-11 21:42:22.228780] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:01.302 [2024-07-11 21:42:22.229096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.302 [2024-07-11 21:42:22.229132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:01.302 [2024-07-11 21:42:22.233763] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:01.302 [2024-07-11 21:42:22.234083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.302 
[2024-07-11 21:42:22.234119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.302 [2024-07-11 21:42:22.238825] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:01.302 [2024-07-11 21:42:22.239143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.302 [2024-07-11 21:42:22.239180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:01.302 [2024-07-11 21:42:22.243795] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:01.302 [2024-07-11 21:42:22.244117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.302 [2024-07-11 21:42:22.244158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:01.302 [2024-07-11 21:42:22.248764] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:01.302 [2024-07-11 21:42:22.249083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.302 [2024-07-11 21:42:22.249120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:01.561 [2024-07-11 21:42:22.253787] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:01.561 [2024-07-11 21:42:22.254103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.561 [2024-07-11 21:42:22.254140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.561 [2024-07-11 21:42:22.258741] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:01.561 [2024-07-11 21:42:22.259063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.561 [2024-07-11 21:42:22.259100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:01.561 [2024-07-11 21:42:22.263758] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:01.561 [2024-07-11 21:42:22.264078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.561 [2024-07-11 21:42:22.264117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:01.561 [2024-07-11 21:42:22.268780] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:01.561 [2024-07-11 21:42:22.269099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:29:01.561 [2024-07-11 21:42:22.269143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:01.561 [2024-07-11 21:42:22.273747] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:01.562 [2024-07-11 21:42:22.274070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.562 [2024-07-11 21:42:22.274108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.562 [2024-07-11 21:42:22.278731] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:01.562 [2024-07-11 21:42:22.279056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.562 [2024-07-11 21:42:22.279093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:01.562 [2024-07-11 21:42:22.283739] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:01.562 [2024-07-11 21:42:22.284062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.562 [2024-07-11 21:42:22.284098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:01.562 [2024-07-11 21:42:22.288694] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:01.562 [2024-07-11 21:42:22.289014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.562 [2024-07-11 21:42:22.289051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:01.562 [2024-07-11 21:42:22.293656] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:01.562 [2024-07-11 21:42:22.293979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.562 [2024-07-11 21:42:22.294016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.562 [2024-07-11 21:42:22.298605] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:01.562 [2024-07-11 21:42:22.298924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.562 [2024-07-11 21:42:22.298960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:01.562 [2024-07-11 21:42:22.303574] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:01.562 [2024-07-11 21:42:22.303896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 
nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.562 [2024-07-11 21:42:22.303934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:01.562 [2024-07-11 21:42:22.308505] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:01.562 [2024-07-11 21:42:22.308829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.562 [2024-07-11 21:42:22.308866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:01.562 [2024-07-11 21:42:22.313431] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:01.562 [2024-07-11 21:42:22.313764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.562 [2024-07-11 21:42:22.313802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.562 [2024-07-11 21:42:22.318388] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:01.562 [2024-07-11 21:42:22.318733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.562 [2024-07-11 21:42:22.318776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:01.562 [2024-07-11 21:42:22.323426] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:01.562 [2024-07-11 21:42:22.323764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.562 [2024-07-11 21:42:22.323801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:01.562 [2024-07-11 21:42:22.328417] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:01.562 [2024-07-11 21:42:22.328755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.562 [2024-07-11 21:42:22.328792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:01.562 [2024-07-11 21:42:22.333410] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:01.562 [2024-07-11 21:42:22.333739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.562 [2024-07-11 21:42:22.333777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.562 [2024-07-11 21:42:22.338350] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:01.562 [2024-07-11 21:42:22.338681] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.562 [2024-07-11 21:42:22.338728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:01.562 [2024-07-11 21:42:22.343379] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:01.562 [2024-07-11 21:42:22.343710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.562 [2024-07-11 21:42:22.343747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:01.562 [2024-07-11 21:42:22.348341] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:01.562 [2024-07-11 21:42:22.348670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.562 [2024-07-11 21:42:22.348705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:01.562 [2024-07-11 21:42:22.353333] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:01.562 [2024-07-11 21:42:22.353663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.562 [2024-07-11 21:42:22.353705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.562 [2024-07-11 21:42:22.358354] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:01.562 [2024-07-11 21:42:22.358691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.562 [2024-07-11 21:42:22.358734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:01.562 [2024-07-11 21:42:22.363351] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:01.562 [2024-07-11 21:42:22.363681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.562 [2024-07-11 21:42:22.363720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:01.562 [2024-07-11 21:42:22.368367] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:01.562 [2024-07-11 21:42:22.368700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.562 [2024-07-11 21:42:22.368737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:01.562 [2024-07-11 21:42:22.373334] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:01.562 
[2024-07-11 21:42:22.373660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.562 [2024-07-11 21:42:22.373710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.562 [2024-07-11 21:42:22.378338] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:01.562 [2024-07-11 21:42:22.378672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.562 [2024-07-11 21:42:22.378714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:01.562 [2024-07-11 21:42:22.383310] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:01.562 [2024-07-11 21:42:22.383652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.562 [2024-07-11 21:42:22.383689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:01.562 [2024-07-11 21:42:22.388292] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:01.562 [2024-07-11 21:42:22.388625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.562 [2024-07-11 21:42:22.388662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:01.562 [2024-07-11 21:42:22.393296] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:01.562 [2024-07-11 21:42:22.393627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.563 [2024-07-11 21:42:22.393664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.563 [2024-07-11 21:42:22.398249] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:01.563 [2024-07-11 21:42:22.398599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.563 [2024-07-11 21:42:22.398636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:01.563 [2024-07-11 21:42:22.403183] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:01.563 [2024-07-11 21:42:22.403516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.563 [2024-07-11 21:42:22.403552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:01.563 [2024-07-11 21:42:22.408158] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:01.563 [2024-07-11 21:42:22.408475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.563 [2024-07-11 21:42:22.408524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:01.563 [2024-07-11 21:42:22.413190] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:01.563 [2024-07-11 21:42:22.413530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.563 [2024-07-11 21:42:22.413573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.563 [2024-07-11 21:42:22.418752] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:01.563 [2024-07-11 21:42:22.419082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.563 [2024-07-11 21:42:22.419119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:01.563 [2024-07-11 21:42:22.423751] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:01.563 [2024-07-11 21:42:22.424073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.563 [2024-07-11 21:42:22.424109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:01.563 [2024-07-11 21:42:22.428732] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:01.563 [2024-07-11 21:42:22.429054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.563 [2024-07-11 21:42:22.429090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:01.563 [2024-07-11 21:42:22.433767] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:01.563 [2024-07-11 21:42:22.434089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.563 [2024-07-11 21:42:22.434127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.563 [2024-07-11 21:42:22.438769] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:01.563 [2024-07-11 21:42:22.439106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.563 [2024-07-11 21:42:22.439140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:01.563 [2024-07-11 21:42:22.443728] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:01.563 [2024-07-11 21:42:22.444052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.563 [2024-07-11 21:42:22.444090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:01.563 [2024-07-11 21:42:22.448720] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:01.563 [2024-07-11 21:42:22.449039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.563 [2024-07-11 21:42:22.449075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:01.563 [2024-07-11 21:42:22.453727] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:01.563 [2024-07-11 21:42:22.454047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.563 [2024-07-11 21:42:22.454083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.563 [2024-07-11 21:42:22.458740] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:01.563 [2024-07-11 21:42:22.459062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.563 [2024-07-11 21:42:22.459099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:01.563 [2024-07-11 21:42:22.463749] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:01.563 [2024-07-11 21:42:22.464068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.563 [2024-07-11 21:42:22.464104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:01.563 [2024-07-11 21:42:22.468742] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:01.563 [2024-07-11 21:42:22.469074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.563 [2024-07-11 21:42:22.469112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:01.563 [2024-07-11 21:42:22.473777] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:01.563 [2024-07-11 21:42:22.474099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.563 [2024-07-11 21:42:22.474136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:29:01.563 [2024-07-11 21:42:22.478821] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:01.563 [2024-07-11 21:42:22.479141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.563 [2024-07-11 21:42:22.479177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:01.563 [2024-07-11 21:42:22.483841] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:01.563 [2024-07-11 21:42:22.484158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.563 [2024-07-11 21:42:22.484195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:01.563 [2024-07-11 21:42:22.488756] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:01.563 [2024-07-11 21:42:22.489072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.563 [2024-07-11 21:42:22.489108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:01.563 [2024-07-11 21:42:22.493765] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:01.563 [2024-07-11 21:42:22.494085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.563 [2024-07-11 21:42:22.494121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.563 [2024-07-11 21:42:22.498748] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:01.563 [2024-07-11 21:42:22.499081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.563 [2024-07-11 21:42:22.499118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:01.563 [2024-07-11 21:42:22.503762] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:01.563 [2024-07-11 21:42:22.504078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.563 [2024-07-11 21:42:22.504114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:01.563 [2024-07-11 21:42:22.508656] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:01.563 [2024-07-11 21:42:22.508976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.563 [2024-07-11 21:42:22.509011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:01.823 [2024-07-11 21:42:22.513675] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:01.823 [2024-07-11 21:42:22.513992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.823 [2024-07-11 21:42:22.514028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.823 [2024-07-11 21:42:22.518707] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:01.823 [2024-07-11 21:42:22.519024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.823 [2024-07-11 21:42:22.519060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:01.823 [2024-07-11 21:42:22.523659] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:01.823 [2024-07-11 21:42:22.523980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.823 [2024-07-11 21:42:22.524016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:01.823 [2024-07-11 21:42:22.528608] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:01.823 [2024-07-11 21:42:22.528936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.823 [2024-07-11 21:42:22.528973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:01.823 [2024-07-11 21:42:22.533623] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:01.823 [2024-07-11 21:42:22.533939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.823 [2024-07-11 21:42:22.533975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.823 [2024-07-11 21:42:22.538578] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:01.823 [2024-07-11 21:42:22.538897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.823 [2024-07-11 21:42:22.538933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:01.823 [2024-07-11 21:42:22.543557] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:01.823 [2024-07-11 21:42:22.543876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.823 [2024-07-11 21:42:22.543912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:01.823 [2024-07-11 21:42:22.548559] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:01.823 [2024-07-11 21:42:22.548876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.823 [2024-07-11 21:42:22.548902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:01.823 [2024-07-11 21:42:22.553707] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:01.823 [2024-07-11 21:42:22.554021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.823 [2024-07-11 21:42:22.554054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.823 [2024-07-11 21:42:22.558729] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:01.823 [2024-07-11 21:42:22.559061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.823 [2024-07-11 21:42:22.559098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:01.823 [2024-07-11 21:42:22.563736] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:01.823 [2024-07-11 21:42:22.564064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.823 [2024-07-11 21:42:22.564100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:01.823 [2024-07-11 21:42:22.568768] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:01.823 [2024-07-11 21:42:22.569084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.823 [2024-07-11 21:42:22.569120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:01.823 [2024-07-11 21:42:22.573723] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:01.823 [2024-07-11 21:42:22.574046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.823 [2024-07-11 21:42:22.574082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.823 [2024-07-11 21:42:22.578755] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:01.823 [2024-07-11 21:42:22.579089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.823 [2024-07-11 21:42:22.579128] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:01.823 [2024-07-11 21:42:22.583770] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:01.823 [2024-07-11 21:42:22.584088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.823 [2024-07-11 21:42:22.584124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:01.823 [2024-07-11 21:42:22.588752] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:01.823 [2024-07-11 21:42:22.589069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.823 [2024-07-11 21:42:22.589106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:01.823 [2024-07-11 21:42:22.593769] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:01.823 [2024-07-11 21:42:22.594090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.823 [2024-07-11 21:42:22.594126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.823 [2024-07-11 21:42:22.598796] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:01.823 [2024-07-11 21:42:22.599119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.823 [2024-07-11 21:42:22.599155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:01.823 [2024-07-11 21:42:22.603807] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:01.823 [2024-07-11 21:42:22.604130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.823 [2024-07-11 21:42:22.604166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:01.823 [2024-07-11 21:42:22.608763] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:01.823 [2024-07-11 21:42:22.609072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.823 [2024-07-11 21:42:22.609118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:01.823 [2024-07-11 21:42:22.613748] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:01.824 [2024-07-11 21:42:22.614064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.824 
[2024-07-11 21:42:22.614099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.824 [2024-07-11 21:42:22.618729] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:01.824 [2024-07-11 21:42:22.619051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.824 [2024-07-11 21:42:22.619088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:01.824 [2024-07-11 21:42:22.623686] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:01.824 [2024-07-11 21:42:22.624004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.824 [2024-07-11 21:42:22.624039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:01.824 [2024-07-11 21:42:22.628707] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:01.824 [2024-07-11 21:42:22.629025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.824 [2024-07-11 21:42:22.629061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:01.824 [2024-07-11 21:42:22.633755] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:01.824 [2024-07-11 21:42:22.634072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.824 [2024-07-11 21:42:22.634108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.824 [2024-07-11 21:42:22.638708] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:01.824 [2024-07-11 21:42:22.639028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.824 [2024-07-11 21:42:22.639064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:01.824 [2024-07-11 21:42:22.643654] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:01.824 [2024-07-11 21:42:22.643972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.824 [2024-07-11 21:42:22.644009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:01.824 [2024-07-11 21:42:22.648576] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:01.824 [2024-07-11 21:42:22.648894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:29:01.824 [2024-07-11 21:42:22.648930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:01.824 [2024-07-11 21:42:22.653558] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:01.824 [2024-07-11 21:42:22.653879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.824 [2024-07-11 21:42:22.653914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.824 [2024-07-11 21:42:22.658526] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:01.824 [2024-07-11 21:42:22.658850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.824 [2024-07-11 21:42:22.658887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:01.824 [2024-07-11 21:42:22.663501] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:01.824 [2024-07-11 21:42:22.663817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.824 [2024-07-11 21:42:22.663853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:01.824 [2024-07-11 21:42:22.668439] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:01.824 [2024-07-11 21:42:22.668773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.824 [2024-07-11 21:42:22.668810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:01.824 [2024-07-11 21:42:22.673415] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:01.824 [2024-07-11 21:42:22.673745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.824 [2024-07-11 21:42:22.673781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.824 [2024-07-11 21:42:22.678342] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:01.824 [2024-07-11 21:42:22.678682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.824 [2024-07-11 21:42:22.678720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:01.824 [2024-07-11 21:42:22.683390] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:01.824 [2024-07-11 21:42:22.683724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.824 [2024-07-11 21:42:22.683760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:01.824 [2024-07-11 21:42:22.688395] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:01.824 [2024-07-11 21:42:22.688727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.824 [2024-07-11 21:42:22.688765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:01.824 [2024-07-11 21:42:22.693385] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:01.824 [2024-07-11 21:42:22.693722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.824 [2024-07-11 21:42:22.693760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.824 [2024-07-11 21:42:22.698364] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:01.824 [2024-07-11 21:42:22.698705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.824 [2024-07-11 21:42:22.698748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:01.824 [2024-07-11 21:42:22.703296] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:01.824 [2024-07-11 21:42:22.703627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.824 [2024-07-11 21:42:22.703663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:01.824 [2024-07-11 21:42:22.708270] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:01.824 [2024-07-11 21:42:22.708603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.824 [2024-07-11 21:42:22.708639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:01.824 [2024-07-11 21:42:22.713237] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:01.824 [2024-07-11 21:42:22.713570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.824 [2024-07-11 21:42:22.713606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.824 [2024-07-11 21:42:22.718256] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:01.824 [2024-07-11 21:42:22.718598] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.824 [2024-07-11 21:42:22.718635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:01.824 [2024-07-11 21:42:22.723208] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:01.824 [2024-07-11 21:42:22.723541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.824 [2024-07-11 21:42:22.723576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:01.824 [2024-07-11 21:42:22.728196] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:01.824 [2024-07-11 21:42:22.728532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.824 [2024-07-11 21:42:22.728569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:01.824 [2024-07-11 21:42:22.733190] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:01.824 [2024-07-11 21:42:22.733521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.824 [2024-07-11 21:42:22.733563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.824 [2024-07-11 21:42:22.738180] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:01.824 [2024-07-11 21:42:22.738524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.824 [2024-07-11 21:42:22.738559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:01.824 [2024-07-11 21:42:22.743144] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:01.824 [2024-07-11 21:42:22.743459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.824 [2024-07-11 21:42:22.743506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:01.824 [2024-07-11 21:42:22.748115] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:01.824 [2024-07-11 21:42:22.748432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.824 [2024-07-11 21:42:22.748468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:01.824 [2024-07-11 21:42:22.753089] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:01.824 
[2024-07-11 21:42:22.753415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.824 [2024-07-11 21:42:22.753447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.824 [2024-07-11 21:42:22.758078] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:01.825 [2024-07-11 21:42:22.758394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.825 [2024-07-11 21:42:22.758430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:01.825 [2024-07-11 21:42:22.763068] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:01.825 [2024-07-11 21:42:22.763382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.825 [2024-07-11 21:42:22.763418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:01.825 [2024-07-11 21:42:22.768052] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:01.825 [2024-07-11 21:42:22.768371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.825 [2024-07-11 21:42:22.768415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.084 [2024-07-11 21:42:22.773017] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.084 [2024-07-11 21:42:22.773331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.084 [2024-07-11 21:42:22.773367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.084 [2024-07-11 21:42:22.777973] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.084 [2024-07-11 21:42:22.778290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.084 [2024-07-11 21:42:22.778326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.084 [2024-07-11 21:42:22.782930] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.084 [2024-07-11 21:42:22.783245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.084 [2024-07-11 21:42:22.783282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.084 [2024-07-11 21:42:22.787976] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.084 [2024-07-11 21:42:22.788292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.084 [2024-07-11 21:42:22.788330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.084 [2024-07-11 21:42:22.792934] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.084 [2024-07-11 21:42:22.793254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.084 [2024-07-11 21:42:22.793293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.084 [2024-07-11 21:42:22.797927] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.084 [2024-07-11 21:42:22.798248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.084 [2024-07-11 21:42:22.798287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.084 [2024-07-11 21:42:22.802839] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.084 [2024-07-11 21:42:22.803158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.084 [2024-07-11 21:42:22.803195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.084 [2024-07-11 21:42:22.807873] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.084 [2024-07-11 21:42:22.808192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.084 [2024-07-11 21:42:22.808232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.084 [2024-07-11 21:42:22.812890] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.084 [2024-07-11 21:42:22.813200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.084 [2024-07-11 21:42:22.813244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.084 [2024-07-11 21:42:22.817868] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.084 [2024-07-11 21:42:22.818183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.084 [2024-07-11 21:42:22.818232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.085 [2024-07-11 21:42:22.822874] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.085 [2024-07-11 21:42:22.823205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.085 [2024-07-11 21:42:22.823243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.085 [2024-07-11 21:42:22.827923] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.085 [2024-07-11 21:42:22.828239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.085 [2024-07-11 21:42:22.828277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.085 [2024-07-11 21:42:22.832920] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.085 [2024-07-11 21:42:22.833246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.085 [2024-07-11 21:42:22.833298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.085 [2024-07-11 21:42:22.837956] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.085 [2024-07-11 21:42:22.838281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.085 [2024-07-11 21:42:22.838319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.085 [2024-07-11 21:42:22.842951] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.085 [2024-07-11 21:42:22.843278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.085 [2024-07-11 21:42:22.843326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.085 [2024-07-11 21:42:22.847907] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.085 [2024-07-11 21:42:22.848235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.085 [2024-07-11 21:42:22.848280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.085 [2024-07-11 21:42:22.852907] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.085 [2024-07-11 21:42:22.853204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.085 [2024-07-11 21:42:22.853238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:29:02.085 [2024-07-11 21:42:22.857884] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.085 [2024-07-11 21:42:22.858203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.085 [2024-07-11 21:42:22.858241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.085 [2024-07-11 21:42:22.862809] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.085 [2024-07-11 21:42:22.863126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.085 [2024-07-11 21:42:22.863164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.085 [2024-07-11 21:42:22.867816] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.085 [2024-07-11 21:42:22.868136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.085 [2024-07-11 21:42:22.868173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.085 [2024-07-11 21:42:22.872764] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.085 [2024-07-11 21:42:22.873084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.085 [2024-07-11 21:42:22.873120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.085 [2024-07-11 21:42:22.877725] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.085 [2024-07-11 21:42:22.878045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.085 [2024-07-11 21:42:22.878081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.085 [2024-07-11 21:42:22.882728] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.085 [2024-07-11 21:42:22.883054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.085 [2024-07-11 21:42:22.883090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.085 [2024-07-11 21:42:22.887719] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.085 [2024-07-11 21:42:22.888040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.085 [2024-07-11 21:42:22.888077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.085 [2024-07-11 21:42:22.892760] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.085 [2024-07-11 21:42:22.893080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.085 [2024-07-11 21:42:22.893117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.085 [2024-07-11 21:42:22.897787] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.085 [2024-07-11 21:42:22.898110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.085 [2024-07-11 21:42:22.898150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.085 [2024-07-11 21:42:22.902805] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.085 [2024-07-11 21:42:22.903122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.085 [2024-07-11 21:42:22.903159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.085 [2024-07-11 21:42:22.907745] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.085 [2024-07-11 21:42:22.908068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.085 [2024-07-11 21:42:22.908106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.085 [2024-07-11 21:42:22.912753] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.085 [2024-07-11 21:42:22.913055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.085 [2024-07-11 21:42:22.913088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.085 [2024-07-11 21:42:22.917703] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.085 [2024-07-11 21:42:22.918022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.085 [2024-07-11 21:42:22.918059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.085 [2024-07-11 21:42:22.922685] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.085 [2024-07-11 21:42:22.923009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.085 [2024-07-11 21:42:22.923045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.085 [2024-07-11 21:42:22.927651] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.085 [2024-07-11 21:42:22.927968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.085 [2024-07-11 21:42:22.928005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.085 [2024-07-11 21:42:22.932650] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.085 [2024-07-11 21:42:22.932968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.085 [2024-07-11 21:42:22.933005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.085 [2024-07-11 21:42:22.937602] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.085 [2024-07-11 21:42:22.937915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.085 [2024-07-11 21:42:22.937951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.085 [2024-07-11 21:42:22.942515] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.085 [2024-07-11 21:42:22.942837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.085 [2024-07-11 21:42:22.942874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.085 [2024-07-11 21:42:22.947501] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.085 [2024-07-11 21:42:22.947801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.085 [2024-07-11 21:42:22.947837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.085 [2024-07-11 21:42:22.952421] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.085 [2024-07-11 21:42:22.952766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.085 [2024-07-11 21:42:22.952804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.085 [2024-07-11 21:42:22.957400] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.085 [2024-07-11 21:42:22.957733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.085 [2024-07-11 21:42:22.957770] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.085 [2024-07-11 21:42:22.962447] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.086 [2024-07-11 21:42:22.962797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.086 [2024-07-11 21:42:22.962835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.086 [2024-07-11 21:42:22.967423] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.086 [2024-07-11 21:42:22.967761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.086 [2024-07-11 21:42:22.967799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.086 [2024-07-11 21:42:22.972366] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.086 [2024-07-11 21:42:22.972705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.086 [2024-07-11 21:42:22.972749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.086 [2024-07-11 21:42:22.977379] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.086 [2024-07-11 21:42:22.977715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.086 [2024-07-11 21:42:22.977752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.086 [2024-07-11 21:42:22.982367] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.086 [2024-07-11 21:42:22.982705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.086 [2024-07-11 21:42:22.982747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.086 [2024-07-11 21:42:22.987384] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.086 [2024-07-11 21:42:22.987716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.086 [2024-07-11 21:42:22.987761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.086 [2024-07-11 21:42:22.992331] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.086 [2024-07-11 21:42:22.992683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.086 
[2024-07-11 21:42:22.992719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.086 [2024-07-11 21:42:22.997379] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.086 [2024-07-11 21:42:22.997718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.086 [2024-07-11 21:42:22.997756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.086 [2024-07-11 21:42:23.002383] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.086 [2024-07-11 21:42:23.002723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.086 [2024-07-11 21:42:23.002755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.086 [2024-07-11 21:42:23.007400] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.086 [2024-07-11 21:42:23.007732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.086 [2024-07-11 21:42:23.007770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.086 [2024-07-11 21:42:23.012474] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.086 [2024-07-11 21:42:23.012823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.086 [2024-07-11 21:42:23.012861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.086 [2024-07-11 21:42:23.017472] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.086 [2024-07-11 21:42:23.017814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.086 [2024-07-11 21:42:23.017853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.086 [2024-07-11 21:42:23.022448] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.086 [2024-07-11 21:42:23.022789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.086 [2024-07-11 21:42:23.022826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.086 [2024-07-11 21:42:23.027452] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.086 [2024-07-11 21:42:23.027787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.086 [2024-07-11 21:42:23.027825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.086 [2024-07-11 21:42:23.032427] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.086 [2024-07-11 21:42:23.032764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.086 [2024-07-11 21:42:23.032800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.344 [2024-07-11 21:42:23.037445] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.344 [2024-07-11 21:42:23.037772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.344 [2024-07-11 21:42:23.037810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.344 [2024-07-11 21:42:23.042412] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.344 [2024-07-11 21:42:23.042749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.344 [2024-07-11 21:42:23.042786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.344 [2024-07-11 21:42:23.047367] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.344 [2024-07-11 21:42:23.047696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.344 [2024-07-11 21:42:23.047734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.344 [2024-07-11 21:42:23.052278] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.344 [2024-07-11 21:42:23.052610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.344 [2024-07-11 21:42:23.052647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.344 [2024-07-11 21:42:23.057285] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.345 [2024-07-11 21:42:23.057614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.345 [2024-07-11 21:42:23.057654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.345 [2024-07-11 21:42:23.062284] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.345 [2024-07-11 21:42:23.062629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.345 [2024-07-11 21:42:23.062665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.345 [2024-07-11 21:42:23.067295] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.345 [2024-07-11 21:42:23.067626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.345 [2024-07-11 21:42:23.067663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.345 [2024-07-11 21:42:23.072277] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.345 [2024-07-11 21:42:23.072609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.345 [2024-07-11 21:42:23.072653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.345 [2024-07-11 21:42:23.077257] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.345 [2024-07-11 21:42:23.077594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.345 [2024-07-11 21:42:23.077631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.345 [2024-07-11 21:42:23.082190] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.345 [2024-07-11 21:42:23.082530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.345 [2024-07-11 21:42:23.082566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.345 [2024-07-11 21:42:23.087176] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.345 [2024-07-11 21:42:23.087504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.345 [2024-07-11 21:42:23.087540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.345 [2024-07-11 21:42:23.092123] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.345 [2024-07-11 21:42:23.092438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.345 [2024-07-11 21:42:23.092475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.345 [2024-07-11 21:42:23.097137] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.345 [2024-07-11 21:42:23.097455] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.345 [2024-07-11 21:42:23.097502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.345 [2024-07-11 21:42:23.102095] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.345 [2024-07-11 21:42:23.102415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.345 [2024-07-11 21:42:23.102440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.345 [2024-07-11 21:42:23.107098] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.345 [2024-07-11 21:42:23.107410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.345 [2024-07-11 21:42:23.107452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.345 [2024-07-11 21:42:23.112090] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.345 [2024-07-11 21:42:23.112407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.345 [2024-07-11 21:42:23.112443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.345 [2024-07-11 21:42:23.117032] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.345 [2024-07-11 21:42:23.117349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.345 [2024-07-11 21:42:23.117385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.345 [2024-07-11 21:42:23.121977] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.345 [2024-07-11 21:42:23.122291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.345 [2024-07-11 21:42:23.122327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.345 [2024-07-11 21:42:23.126967] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.345 [2024-07-11 21:42:23.127275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.345 [2024-07-11 21:42:23.127318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.345 [2024-07-11 21:42:23.131912] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.345 
[2024-07-11 21:42:23.132223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.345 [2024-07-11 21:42:23.132265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.345 [2024-07-11 21:42:23.136930] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.345 [2024-07-11 21:42:23.137249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.345 [2024-07-11 21:42:23.137275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.345 [2024-07-11 21:42:23.141914] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.345 [2024-07-11 21:42:23.142233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.345 [2024-07-11 21:42:23.142269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.345 [2024-07-11 21:42:23.146898] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.345 [2024-07-11 21:42:23.147211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.345 [2024-07-11 21:42:23.147247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.345 [2024-07-11 21:42:23.151915] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.345 [2024-07-11 21:42:23.152245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.345 [2024-07-11 21:42:23.152282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.345 [2024-07-11 21:42:23.156950] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.345 [2024-07-11 21:42:23.157272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.345 [2024-07-11 21:42:23.157308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.345 [2024-07-11 21:42:23.161942] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.345 [2024-07-11 21:42:23.162263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.345 [2024-07-11 21:42:23.162299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.345 [2024-07-11 21:42:23.167035] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.345 [2024-07-11 21:42:23.167348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.345 [2024-07-11 21:42:23.167383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.345 [2024-07-11 21:42:23.172038] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.345 [2024-07-11 21:42:23.172353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.345 [2024-07-11 21:42:23.172389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.345 [2024-07-11 21:42:23.177010] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.345 [2024-07-11 21:42:23.177336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.345 [2024-07-11 21:42:23.177373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.345 [2024-07-11 21:42:23.181965] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.345 [2024-07-11 21:42:23.182290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.345 [2024-07-11 21:42:23.182326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.345 [2024-07-11 21:42:23.186927] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.345 [2024-07-11 21:42:23.187249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.345 [2024-07-11 21:42:23.187285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.345 [2024-07-11 21:42:23.191878] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.345 [2024-07-11 21:42:23.192196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.345 [2024-07-11 21:42:23.192231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.345 [2024-07-11 21:42:23.196888] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.345 [2024-07-11 21:42:23.197197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.345 [2024-07-11 21:42:23.197238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.345 [2024-07-11 21:42:23.201916] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.345 [2024-07-11 21:42:23.202249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.345 [2024-07-11 21:42:23.202285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.345 [2024-07-11 21:42:23.206954] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.345 [2024-07-11 21:42:23.207286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.345 [2024-07-11 21:42:23.207325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.345 [2024-07-11 21:42:23.211942] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.345 [2024-07-11 21:42:23.212257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.345 [2024-07-11 21:42:23.212294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.345 [2024-07-11 21:42:23.216916] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.345 [2024-07-11 21:42:23.217231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.345 [2024-07-11 21:42:23.217268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.345 [2024-07-11 21:42:23.221884] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.345 [2024-07-11 21:42:23.222206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.345 [2024-07-11 21:42:23.222242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.345 [2024-07-11 21:42:23.226837] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.345 [2024-07-11 21:42:23.227151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.345 [2024-07-11 21:42:23.227194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.345 [2024-07-11 21:42:23.231845] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.345 [2024-07-11 21:42:23.232165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.345 [2024-07-11 21:42:23.232200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
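The repeated tcp.c data_crc32_calc_done errors above are data digest mismatches: for NVMe/TCP, the data digest carried in a data PDU is a CRC32C over the PDU payload, and when the computed value disagrees with the received one the affected WRITE is completed with COMMAND TRANSIENT TRANSPORT ERROR, which is exactly the pattern this test keeps producing. As a rough, self-contained sketch only (not SPDK's implementation; the 32-byte all-zero payload is hypothetical, chosen to mirror the len:32 writes in the log), a CRC32C digest over such a payload could be computed like this:

#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* Bitwise CRC32C (Castagnoli), reflected polynomial 0x82F63B78,
 * init 0xFFFFFFFF, final XOR 0xFFFFFFFF. Slow but dependency-free. */
static uint32_t crc32c(const void *buf, size_t len)
{
    const uint8_t *p = buf;
    uint32_t crc = 0xFFFFFFFFu;

    while (len--) {
        crc ^= *p++;
        for (int k = 0; k < 8; k++)
            crc = (crc >> 1) ^ (0x82F63B78u & (uint32_t)-(int32_t)(crc & 1u));
    }
    return crc ^ 0xFFFFFFFFu;
}

int main(void)
{
    uint8_t payload[32] = {0};  /* hypothetical 32-byte data PDU payload */

    /* A receiver would compare this value against the digest field of the PDU;
     * any mismatch is reported as a data digest error, as seen in the log. */
    printf("CRC32C data digest: 0x%08x\n", crc32c(payload, sizeof(payload)));
    return 0;
}

This is only meant to show what quantity the digest check compares; SPDK computes the digest incrementally over the PDU iovecs rather than over a single flat buffer.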
00:29:02.345 [2024-07-11 21:42:23.236805] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.345 [2024-07-11 21:42:23.237120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.345 [2024-07-11 21:42:23.237161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.345 [2024-07-11 21:42:23.241799] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.345 [2024-07-11 21:42:23.242118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.345 [2024-07-11 21:42:23.242154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.345 [2024-07-11 21:42:23.246827] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.345 [2024-07-11 21:42:23.247144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.345 [2024-07-11 21:42:23.247180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.345 [2024-07-11 21:42:23.251823] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.345 [2024-07-11 21:42:23.252150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.345 [2024-07-11 21:42:23.252189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.345 [2024-07-11 21:42:23.256836] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.345 [2024-07-11 21:42:23.257155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.345 [2024-07-11 21:42:23.257191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.345 [2024-07-11 21:42:23.261833] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.345 [2024-07-11 21:42:23.262153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.345 [2024-07-11 21:42:23.262189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.345 [2024-07-11 21:42:23.266851] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.345 [2024-07-11 21:42:23.267174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.345 [2024-07-11 21:42:23.267213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.345 [2024-07-11 21:42:23.271857] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.345 [2024-07-11 21:42:23.272180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.345 [2024-07-11 21:42:23.272219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.345 [2024-07-11 21:42:23.276833] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.345 [2024-07-11 21:42:23.277150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.345 [2024-07-11 21:42:23.277187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.345 [2024-07-11 21:42:23.281806] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.345 [2024-07-11 21:42:23.282125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.345 [2024-07-11 21:42:23.282162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.345 [2024-07-11 21:42:23.286823] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.345 [2024-07-11 21:42:23.287150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.345 [2024-07-11 21:42:23.287186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.345 [2024-07-11 21:42:23.291845] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.345 [2024-07-11 21:42:23.292158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.345 [2024-07-11 21:42:23.292202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.604 [2024-07-11 21:42:23.296805] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.604 [2024-07-11 21:42:23.297120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.604 [2024-07-11 21:42:23.297156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.604 [2024-07-11 21:42:23.301797] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.604 [2024-07-11 21:42:23.302117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.604 [2024-07-11 21:42:23.302154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.604 [2024-07-11 21:42:23.306840] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.604 [2024-07-11 21:42:23.307166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.604 [2024-07-11 21:42:23.307203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.604 [2024-07-11 21:42:23.311824] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.604 [2024-07-11 21:42:23.312140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.604 [2024-07-11 21:42:23.312176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.604 [2024-07-11 21:42:23.316799] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.604 [2024-07-11 21:42:23.317114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.604 [2024-07-11 21:42:23.317150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.604 [2024-07-11 21:42:23.321773] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.604 [2024-07-11 21:42:23.322088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.604 [2024-07-11 21:42:23.322134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.604 [2024-07-11 21:42:23.326753] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.604 [2024-07-11 21:42:23.327074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.604 [2024-07-11 21:42:23.327110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.604 [2024-07-11 21:42:23.331706] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.604 [2024-07-11 21:42:23.332022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.604 [2024-07-11 21:42:23.332058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.604 [2024-07-11 21:42:23.336721] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.604 [2024-07-11 21:42:23.337038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.604 [2024-07-11 21:42:23.337074] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.604 [2024-07-11 21:42:23.341706] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.604 [2024-07-11 21:42:23.342025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.604 [2024-07-11 21:42:23.342061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.604 [2024-07-11 21:42:23.346719] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.604 [2024-07-11 21:42:23.347054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.604 [2024-07-11 21:42:23.347090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.604 [2024-07-11 21:42:23.351704] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.604 [2024-07-11 21:42:23.352020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.604 [2024-07-11 21:42:23.352056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.604 [2024-07-11 21:42:23.356655] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.604 [2024-07-11 21:42:23.356971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.604 [2024-07-11 21:42:23.357012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.604 [2024-07-11 21:42:23.361621] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.604 [2024-07-11 21:42:23.361938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.604 [2024-07-11 21:42:23.361974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.604 [2024-07-11 21:42:23.366615] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.604 [2024-07-11 21:42:23.366930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.604 [2024-07-11 21:42:23.366965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.604 [2024-07-11 21:42:23.371571] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.604 [2024-07-11 21:42:23.371891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.604 
[2024-07-11 21:42:23.371927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.604 [2024-07-11 21:42:23.376496] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.604 [2024-07-11 21:42:23.376811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.604 [2024-07-11 21:42:23.376847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.604 [2024-07-11 21:42:23.381413] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.604 [2024-07-11 21:42:23.381747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.604 [2024-07-11 21:42:23.381783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.604 [2024-07-11 21:42:23.386344] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.604 [2024-07-11 21:42:23.386681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.604 [2024-07-11 21:42:23.386751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.604 [2024-07-11 21:42:23.391359] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.604 [2024-07-11 21:42:23.391691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.604 [2024-07-11 21:42:23.391727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.604 [2024-07-11 21:42:23.396327] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.604 [2024-07-11 21:42:23.396665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.604 [2024-07-11 21:42:23.396702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.604 [2024-07-11 21:42:23.401312] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.604 [2024-07-11 21:42:23.401642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.604 [2024-07-11 21:42:23.401678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.604 [2024-07-11 21:42:23.406262] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.604 [2024-07-11 21:42:23.406611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.604 [2024-07-11 21:42:23.406647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.604 [2024-07-11 21:42:23.411214] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.604 [2024-07-11 21:42:23.411547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.605 [2024-07-11 21:42:23.411583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.605 [2024-07-11 21:42:23.416202] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.605 [2024-07-11 21:42:23.416520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.605 [2024-07-11 21:42:23.416554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.605 [2024-07-11 21:42:23.421175] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.605 [2024-07-11 21:42:23.421476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.605 [2024-07-11 21:42:23.421520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.605 [2024-07-11 21:42:23.426148] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.605 [2024-07-11 21:42:23.426451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.605 [2024-07-11 21:42:23.426495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.605 [2024-07-11 21:42:23.431147] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.605 [2024-07-11 21:42:23.431445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.605 [2024-07-11 21:42:23.431478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.605 [2024-07-11 21:42:23.436093] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.605 [2024-07-11 21:42:23.436391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.605 [2024-07-11 21:42:23.436417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.605 [2024-07-11 21:42:23.441047] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.605 [2024-07-11 21:42:23.441346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.605 [2024-07-11 21:42:23.441379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.605 [2024-07-11 21:42:23.445983] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.605 [2024-07-11 21:42:23.446284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.605 [2024-07-11 21:42:23.446318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.605 [2024-07-11 21:42:23.450903] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.605 [2024-07-11 21:42:23.451206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.605 [2024-07-11 21:42:23.451240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.605 [2024-07-11 21:42:23.455859] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.605 [2024-07-11 21:42:23.456160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.605 [2024-07-11 21:42:23.456195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.605 [2024-07-11 21:42:23.460799] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.605 [2024-07-11 21:42:23.461099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.605 [2024-07-11 21:42:23.461140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.605 [2024-07-11 21:42:23.465742] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.605 [2024-07-11 21:42:23.466045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.605 [2024-07-11 21:42:23.466079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.605 [2024-07-11 21:42:23.470711] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.605 [2024-07-11 21:42:23.471011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.605 [2024-07-11 21:42:23.471045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.605 [2024-07-11 21:42:23.475687] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.605 [2024-07-11 21:42:23.475985] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.605 [2024-07-11 21:42:23.476018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.605 [2024-07-11 21:42:23.480688] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.605 [2024-07-11 21:42:23.480997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.605 [2024-07-11 21:42:23.481031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.605 [2024-07-11 21:42:23.485709] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.605 [2024-07-11 21:42:23.486023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.605 [2024-07-11 21:42:23.486057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.605 [2024-07-11 21:42:23.490733] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.605 [2024-07-11 21:42:23.491044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.605 [2024-07-11 21:42:23.491078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.605 [2024-07-11 21:42:23.495738] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.605 [2024-07-11 21:42:23.496063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.605 [2024-07-11 21:42:23.496099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.605 [2024-07-11 21:42:23.500779] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.605 [2024-07-11 21:42:23.501096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.605 [2024-07-11 21:42:23.501136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.605 [2024-07-11 21:42:23.505818] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.605 [2024-07-11 21:42:23.506123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.605 [2024-07-11 21:42:23.506159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.605 [2024-07-11 21:42:23.510811] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.605 
[2024-07-11 21:42:23.511115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.605 [2024-07-11 21:42:23.511149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.605 [2024-07-11 21:42:23.515753] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.605 [2024-07-11 21:42:23.516055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.605 [2024-07-11 21:42:23.516091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.605 [2024-07-11 21:42:23.520678] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.605 [2024-07-11 21:42:23.520981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.605 [2024-07-11 21:42:23.521011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.605 [2024-07-11 21:42:23.525591] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.605 [2024-07-11 21:42:23.525891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.605 [2024-07-11 21:42:23.525925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.605 [2024-07-11 21:42:23.530558] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.605 [2024-07-11 21:42:23.530868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.605 [2024-07-11 21:42:23.530901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.605 [2024-07-11 21:42:23.535522] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.605 [2024-07-11 21:42:23.535825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.605 [2024-07-11 21:42:23.535858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.605 [2024-07-11 21:42:23.540459] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.605 [2024-07-11 21:42:23.540775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.605 [2024-07-11 21:42:23.540810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.605 [2024-07-11 21:42:23.545387] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.605 [2024-07-11 21:42:23.545703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.605 [2024-07-11 21:42:23.545727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.605 [2024-07-11 21:42:23.550353] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.605 [2024-07-11 21:42:23.550678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.605 [2024-07-11 21:42:23.550702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.864 [2024-07-11 21:42:23.555281] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.864 [2024-07-11 21:42:23.555593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.864 [2024-07-11 21:42:23.555627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.864 [2024-07-11 21:42:23.560215] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.864 [2024-07-11 21:42:23.560542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.864 [2024-07-11 21:42:23.560576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.864 [2024-07-11 21:42:23.565198] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.864 [2024-07-11 21:42:23.565525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.864 [2024-07-11 21:42:23.565559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.864 [2024-07-11 21:42:23.570217] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.864 [2024-07-11 21:42:23.570550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.864 [2024-07-11 21:42:23.570585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.864 [2024-07-11 21:42:23.575150] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.864 [2024-07-11 21:42:23.575448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.864 [2024-07-11 21:42:23.575494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.864 [2024-07-11 21:42:23.580161] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.864 [2024-07-11 21:42:23.580464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.864 [2024-07-11 21:42:23.580510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.864 [2024-07-11 21:42:23.585174] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.864 [2024-07-11 21:42:23.585477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.864 [2024-07-11 21:42:23.585523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.864 [2024-07-11 21:42:23.590156] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.864 [2024-07-11 21:42:23.590458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.864 [2024-07-11 21:42:23.590503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.864 [2024-07-11 21:42:23.595162] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.864 [2024-07-11 21:42:23.595461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.864 [2024-07-11 21:42:23.595505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.864 [2024-07-11 21:42:23.600116] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.864 [2024-07-11 21:42:23.600418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.864 [2024-07-11 21:42:23.600452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.864 [2024-07-11 21:42:23.605076] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.864 [2024-07-11 21:42:23.605384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.864 [2024-07-11 21:42:23.605418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.864 [2024-07-11 21:42:23.610031] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.864 [2024-07-11 21:42:23.610335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.864 [2024-07-11 21:42:23.610375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:29:02.864 [2024-07-11 21:42:23.615015] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.864 [2024-07-11 21:42:23.615312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.864 [2024-07-11 21:42:23.615346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.864 [2024-07-11 21:42:23.620014] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.864 [2024-07-11 21:42:23.620313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.864 [2024-07-11 21:42:23.620348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.864 [2024-07-11 21:42:23.624916] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.864 [2024-07-11 21:42:23.625217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.864 [2024-07-11 21:42:23.625250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.864 [2024-07-11 21:42:23.629861] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.864 [2024-07-11 21:42:23.630161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.864 [2024-07-11 21:42:23.630195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.864 [2024-07-11 21:42:23.634774] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.864 [2024-07-11 21:42:23.635075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.864 [2024-07-11 21:42:23.635109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.864 [2024-07-11 21:42:23.639717] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.864 [2024-07-11 21:42:23.640027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.864 [2024-07-11 21:42:23.640061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.865 [2024-07-11 21:42:23.644630] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.865 [2024-07-11 21:42:23.644930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.865 [2024-07-11 21:42:23.644964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.865 [2024-07-11 21:42:23.649549] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.865 [2024-07-11 21:42:23.649862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.865 [2024-07-11 21:42:23.649895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.865 [2024-07-11 21:42:23.654434] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.865 [2024-07-11 21:42:23.654757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.865 [2024-07-11 21:42:23.654791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.865 [2024-07-11 21:42:23.659388] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.865 [2024-07-11 21:42:23.659702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.865 [2024-07-11 21:42:23.659735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.865 [2024-07-11 21:42:23.664311] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.865 [2024-07-11 21:42:23.664626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.865 [2024-07-11 21:42:23.664660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.865 [2024-07-11 21:42:23.669271] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.865 [2024-07-11 21:42:23.669594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.865 [2024-07-11 21:42:23.669623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.865 [2024-07-11 21:42:23.674192] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.865 [2024-07-11 21:42:23.674522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.865 [2024-07-11 21:42:23.674558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.865 [2024-07-11 21:42:23.679175] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.865 [2024-07-11 21:42:23.679473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.865 [2024-07-11 21:42:23.679517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.865 [2024-07-11 21:42:23.684091] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.865 [2024-07-11 21:42:23.684392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.865 [2024-07-11 21:42:23.684426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.865 [2024-07-11 21:42:23.688982] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.865 [2024-07-11 21:42:23.689280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.865 [2024-07-11 21:42:23.689314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.865 [2024-07-11 21:42:23.693924] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.865 [2024-07-11 21:42:23.694221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.865 [2024-07-11 21:42:23.694254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.865 [2024-07-11 21:42:23.698891] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.865 [2024-07-11 21:42:23.699202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.865 [2024-07-11 21:42:23.699235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.865 [2024-07-11 21:42:23.703798] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.865 [2024-07-11 21:42:23.704096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.865 [2024-07-11 21:42:23.704129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.865 [2024-07-11 21:42:23.708760] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.865 [2024-07-11 21:42:23.709059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.865 [2024-07-11 21:42:23.709092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.865 [2024-07-11 21:42:23.713721] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.865 [2024-07-11 21:42:23.714024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.865 [2024-07-11 21:42:23.714058] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.865 [2024-07-11 21:42:23.718657] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.865 [2024-07-11 21:42:23.718962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.865 [2024-07-11 21:42:23.718996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.865 [2024-07-11 21:42:23.723608] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.865 [2024-07-11 21:42:23.723920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.865 [2024-07-11 21:42:23.723955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.865 [2024-07-11 21:42:23.728538] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.865 [2024-07-11 21:42:23.728840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.865 [2024-07-11 21:42:23.728873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.865 [2024-07-11 21:42:23.733468] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.865 [2024-07-11 21:42:23.733782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.865 [2024-07-11 21:42:23.733815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.865 [2024-07-11 21:42:23.738414] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.865 [2024-07-11 21:42:23.738735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.865 [2024-07-11 21:42:23.738770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.865 [2024-07-11 21:42:23.743321] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.865 [2024-07-11 21:42:23.743634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.865 [2024-07-11 21:42:23.743667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.865 [2024-07-11 21:42:23.748246] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.865 [2024-07-11 21:42:23.748569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.865 
[2024-07-11 21:42:23.748604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.865 [2024-07-11 21:42:23.753193] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.865 [2024-07-11 21:42:23.753508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.865 [2024-07-11 21:42:23.753540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.865 [2024-07-11 21:42:23.758137] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.865 [2024-07-11 21:42:23.758435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.865 [2024-07-11 21:42:23.758468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.865 [2024-07-11 21:42:23.763103] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.865 [2024-07-11 21:42:23.763402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.865 [2024-07-11 21:42:23.763436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.865 [2024-07-11 21:42:23.768049] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.865 [2024-07-11 21:42:23.768357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.865 [2024-07-11 21:42:23.768390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.865 [2024-07-11 21:42:23.773000] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.866 [2024-07-11 21:42:23.773299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.866 [2024-07-11 21:42:23.773332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.866 [2024-07-11 21:42:23.777902] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.866 [2024-07-11 21:42:23.778200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.866 [2024-07-11 21:42:23.778233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.866 [2024-07-11 21:42:23.782859] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.866 [2024-07-11 21:42:23.783159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:29:02.866 [2024-07-11 21:42:23.783193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.866 [2024-07-11 21:42:23.787834] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.866 [2024-07-11 21:42:23.788135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.866 [2024-07-11 21:42:23.788168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.866 [2024-07-11 21:42:23.792747] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.866 [2024-07-11 21:42:23.793045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.866 [2024-07-11 21:42:23.793078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.866 [2024-07-11 21:42:23.797640] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.866 [2024-07-11 21:42:23.797951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.866 [2024-07-11 21:42:23.797986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.866 [2024-07-11 21:42:23.802584] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.866 [2024-07-11 21:42:23.802884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.866 [2024-07-11 21:42:23.802927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.866 [2024-07-11 21:42:23.807524] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.866 [2024-07-11 21:42:23.807824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.866 [2024-07-11 21:42:23.807857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.866 [2024-07-11 21:42:23.812439] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:02.866 [2024-07-11 21:42:23.812753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.866 [2024-07-11 21:42:23.812787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:03.125 [2024-07-11 21:42:23.817376] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:03.125 [2024-07-11 21:42:23.817694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.125 [2024-07-11 21:42:23.817728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:03.125 [2024-07-11 21:42:23.822309] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:03.125 [2024-07-11 21:42:23.822634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.125 [2024-07-11 21:42:23.822667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:03.125 [2024-07-11 21:42:23.827294] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:03.125 [2024-07-11 21:42:23.827611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.125 [2024-07-11 21:42:23.827646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.125 [2024-07-11 21:42:23.832305] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:03.125 [2024-07-11 21:42:23.832622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.125 [2024-07-11 21:42:23.832654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:03.125 [2024-07-11 21:42:23.837225] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:03.125 [2024-07-11 21:42:23.837549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.125 [2024-07-11 21:42:23.837583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:03.125 [2024-07-11 21:42:23.842190] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:03.126 [2024-07-11 21:42:23.842516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.126 [2024-07-11 21:42:23.842547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:03.126 [2024-07-11 21:42:23.847150] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:03.126 [2024-07-11 21:42:23.847452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.126 [2024-07-11 21:42:23.847495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.126 [2024-07-11 21:42:23.852119] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:03.126 [2024-07-11 21:42:23.852421] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.126 [2024-07-11 21:42:23.852453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:03.126 [2024-07-11 21:42:23.857036] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:03.126 [2024-07-11 21:42:23.857338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.126 [2024-07-11 21:42:23.857371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:03.126 [2024-07-11 21:42:23.862034] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:03.126 [2024-07-11 21:42:23.862337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.126 [2024-07-11 21:42:23.862369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:03.126 [2024-07-11 21:42:23.867000] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:03.126 [2024-07-11 21:42:23.867302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.126 [2024-07-11 21:42:23.867336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.126 [2024-07-11 21:42:23.871906] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:03.126 [2024-07-11 21:42:23.872220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.126 [2024-07-11 21:42:23.872253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:03.126 [2024-07-11 21:42:23.876859] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:03.126 [2024-07-11 21:42:23.877158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.126 [2024-07-11 21:42:23.877191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:03.126 [2024-07-11 21:42:23.881820] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:03.126 [2024-07-11 21:42:23.882118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.126 [2024-07-11 21:42:23.882150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:03.126 [2024-07-11 21:42:23.886763] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:03.126 
[2024-07-11 21:42:23.887064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.126 [2024-07-11 21:42:23.887096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.126 [2024-07-11 21:42:23.891739] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:03.126 [2024-07-11 21:42:23.892043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.126 [2024-07-11 21:42:23.892077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:03.126 [2024-07-11 21:42:23.896723] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:03.126 [2024-07-11 21:42:23.897024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.126 [2024-07-11 21:42:23.897056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:03.126 [2024-07-11 21:42:23.901700] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:03.126 [2024-07-11 21:42:23.902004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.126 [2024-07-11 21:42:23.902037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:03.126 [2024-07-11 21:42:23.906661] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:03.126 [2024-07-11 21:42:23.906962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.126 [2024-07-11 21:42:23.906994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.126 [2024-07-11 21:42:23.911624] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:03.126 [2024-07-11 21:42:23.911928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.126 [2024-07-11 21:42:23.911960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:03.126 [2024-07-11 21:42:23.916584] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:03.126 [2024-07-11 21:42:23.916887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.126 [2024-07-11 21:42:23.916920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:03.126 [2024-07-11 21:42:23.921609] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:03.126 [2024-07-11 21:42:23.921913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.126 [2024-07-11 21:42:23.921946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:03.126 [2024-07-11 21:42:23.926571] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:03.126 [2024-07-11 21:42:23.926871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.126 [2024-07-11 21:42:23.926904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.126 [2024-07-11 21:42:23.931508] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:03.126 [2024-07-11 21:42:23.931811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.126 [2024-07-11 21:42:23.931846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:03.126 [2024-07-11 21:42:23.936441] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:03.126 [2024-07-11 21:42:23.936770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.126 [2024-07-11 21:42:23.936804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:03.126 [2024-07-11 21:42:23.941381] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:03.126 [2024-07-11 21:42:23.941698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.126 [2024-07-11 21:42:23.941731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:03.126 [2024-07-11 21:42:23.946340] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:03.126 [2024-07-11 21:42:23.946667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.126 [2024-07-11 21:42:23.946702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.126 [2024-07-11 21:42:23.951245] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:03.126 [2024-07-11 21:42:23.951567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.126 [2024-07-11 21:42:23.951600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:03.126 [2024-07-11 21:42:23.956205] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:03.126 [2024-07-11 21:42:23.956520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.126 [2024-07-11 21:42:23.956554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:03.126 [2024-07-11 21:42:23.961220] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:03.126 [2024-07-11 21:42:23.961530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.126 [2024-07-11 21:42:23.961563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:03.126 [2024-07-11 21:42:23.966223] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:03.126 [2024-07-11 21:42:23.966560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.126 [2024-07-11 21:42:23.966593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.126 [2024-07-11 21:42:23.971227] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:03.126 [2024-07-11 21:42:23.971543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.126 [2024-07-11 21:42:23.971575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:03.126 [2024-07-11 21:42:23.976224] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:03.126 [2024-07-11 21:42:23.976548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.127 [2024-07-11 21:42:23.976582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:03.127 [2024-07-11 21:42:23.981185] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:03.127 [2024-07-11 21:42:23.981511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.127 [2024-07-11 21:42:23.981544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:03.127 [2024-07-11 21:42:23.986171] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:03.127 [2024-07-11 21:42:23.986471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.127 [2024-07-11 21:42:23.986524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:29:03.127 [2024-07-11 21:42:23.991130] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:03.127 [2024-07-11 21:42:23.991429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.127 [2024-07-11 21:42:23.991462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:03.127 [2024-07-11 21:42:23.996069] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:03.127 [2024-07-11 21:42:23.996368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.127 [2024-07-11 21:42:23.996401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:03.127 [2024-07-11 21:42:24.000927] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:03.127 [2024-07-11 21:42:24.001229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.127 [2024-07-11 21:42:24.001263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:03.127 [2024-07-11 21:42:24.005888] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:03.127 [2024-07-11 21:42:24.006190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.127 [2024-07-11 21:42:24.006222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.127 [2024-07-11 21:42:24.010837] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:03.127 [2024-07-11 21:42:24.011141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.127 [2024-07-11 21:42:24.011174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:03.127 [2024-07-11 21:42:24.015756] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:03.127 [2024-07-11 21:42:24.016055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.127 [2024-07-11 21:42:24.016088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:03.127 [2024-07-11 21:42:24.020687] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:03.127 [2024-07-11 21:42:24.020988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.127 [2024-07-11 21:42:24.021022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:03.127 [2024-07-11 21:42:24.025669] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:03.127 [2024-07-11 21:42:24.025968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.127 [2024-07-11 21:42:24.026000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.127 [2024-07-11 21:42:24.030626] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:03.127 [2024-07-11 21:42:24.030933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.127 [2024-07-11 21:42:24.030966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:03.127 [2024-07-11 21:42:24.035507] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:03.127 [2024-07-11 21:42:24.035808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.127 [2024-07-11 21:42:24.035843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:03.127 [2024-07-11 21:42:24.040437] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:03.127 [2024-07-11 21:42:24.040762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.127 [2024-07-11 21:42:24.040795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:03.127 [2024-07-11 21:42:24.045349] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:03.127 [2024-07-11 21:42:24.045671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.127 [2024-07-11 21:42:24.045704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.127 [2024-07-11 21:42:24.050274] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:03.127 [2024-07-11 21:42:24.050597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.127 [2024-07-11 21:42:24.050630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:03.127 [2024-07-11 21:42:24.055243] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:03.127 [2024-07-11 21:42:24.055557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.127 [2024-07-11 21:42:24.055591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:03.127 [2024-07-11 21:42:24.060149] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:03.127 [2024-07-11 21:42:24.060450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.127 [2024-07-11 21:42:24.060495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:03.127 [2024-07-11 21:42:24.065109] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:03.127 [2024-07-11 21:42:24.065409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.127 [2024-07-11 21:42:24.065441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.127 [2024-07-11 21:42:24.070052] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:03.127 [2024-07-11 21:42:24.070351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.127 [2024-07-11 21:42:24.070384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:03.385 [2024-07-11 21:42:24.075032] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:03.385 [2024-07-11 21:42:24.075333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.385 [2024-07-11 21:42:24.075367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:03.385 [2024-07-11 21:42:24.079940] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:03.385 [2024-07-11 21:42:24.080237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.385 [2024-07-11 21:42:24.080270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:03.385 [2024-07-11 21:42:24.084907] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:03.385 [2024-07-11 21:42:24.085201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.385 [2024-07-11 21:42:24.085236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.385 [2024-07-11 21:42:24.089812] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12d9dd0) with pdu=0x2000190fef90 00:29:03.385 [2024-07-11 21:42:24.090120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.385 [2024-07-11 21:42:24.090154] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:03.385 00:29:03.385 Latency(us) 00:29:03.385 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:03.385 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:29:03.385 nvme0n1 : 2.00 6223.78 777.97 0.00 0.00 2565.39 1586.27 5540.77 00:29:03.385 =================================================================================================================== 00:29:03.385 Total : 6223.78 777.97 0.00 0.00 2565.39 1586.27 5540.77 00:29:03.385 0 00:29:03.385 21:42:24 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:29:03.385 21:42:24 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:29:03.385 21:42:24 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:29:03.385 21:42:24 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:29:03.385 | .driver_specific 00:29:03.385 | .nvme_error 00:29:03.385 | .status_code 00:29:03.385 | .command_transient_transport_error' 00:29:03.643 21:42:24 -- host/digest.sh@71 -- # (( 401 > 0 )) 00:29:03.643 21:42:24 -- host/digest.sh@73 -- # killprocess 84160 00:29:03.643 21:42:24 -- common/autotest_common.sh@926 -- # '[' -z 84160 ']' 00:29:03.643 21:42:24 -- common/autotest_common.sh@930 -- # kill -0 84160 00:29:03.643 21:42:24 -- common/autotest_common.sh@931 -- # uname 00:29:03.643 21:42:24 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:03.643 21:42:24 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 84160 00:29:03.643 21:42:24 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:29:03.643 killing process with pid 84160 00:29:03.643 Received shutdown signal, test time was about 2.000000 seconds 00:29:03.643 00:29:03.643 Latency(us) 00:29:03.643 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:03.643 =================================================================================================================== 00:29:03.643 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:03.643 21:42:24 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:29:03.643 21:42:24 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 84160' 00:29:03.643 21:42:24 -- common/autotest_common.sh@945 -- # kill 84160 00:29:03.643 21:42:24 -- common/autotest_common.sh@950 -- # wait 84160 00:29:03.900 21:42:24 -- host/digest.sh@115 -- # killprocess 83953 00:29:03.900 21:42:24 -- common/autotest_common.sh@926 -- # '[' -z 83953 ']' 00:29:03.900 21:42:24 -- common/autotest_common.sh@930 -- # kill -0 83953 00:29:03.900 21:42:24 -- common/autotest_common.sh@931 -- # uname 00:29:03.900 21:42:24 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:03.900 21:42:24 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 83953 00:29:03.900 21:42:24 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:29:03.900 killing process with pid 83953 00:29:03.900 21:42:24 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:29:03.900 21:42:24 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 83953' 00:29:03.900 21:42:24 -- common/autotest_common.sh@945 -- # kill 83953 00:29:03.900 21:42:24 -- common/autotest_common.sh@950 -- # wait 83953 00:29:04.158 00:29:04.158 real 0m18.493s 00:29:04.158 user 0m35.925s 00:29:04.158 sys 0m4.707s 00:29:04.158 21:42:24 -- common/autotest_common.sh@1105 
-- # xtrace_disable 00:29:04.158 21:42:24 -- common/autotest_common.sh@10 -- # set +x 00:29:04.158 ************************************ 00:29:04.158 END TEST nvmf_digest_error 00:29:04.158 ************************************ 00:29:04.158 21:42:24 -- host/digest.sh@138 -- # trap - SIGINT SIGTERM EXIT 00:29:04.158 21:42:24 -- host/digest.sh@139 -- # nvmftestfini 00:29:04.158 21:42:24 -- nvmf/common.sh@476 -- # nvmfcleanup 00:29:04.158 21:42:24 -- nvmf/common.sh@116 -- # sync 00:29:04.158 21:42:24 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:29:04.158 21:42:25 -- nvmf/common.sh@119 -- # set +e 00:29:04.158 21:42:25 -- nvmf/common.sh@120 -- # for i in {1..20} 00:29:04.158 21:42:25 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:29:04.158 rmmod nvme_tcp 00:29:04.158 rmmod nvme_fabrics 00:29:04.158 rmmod nvme_keyring 00:29:04.158 21:42:25 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:29:04.158 21:42:25 -- nvmf/common.sh@123 -- # set -e 00:29:04.158 21:42:25 -- nvmf/common.sh@124 -- # return 0 00:29:04.158 21:42:25 -- nvmf/common.sh@477 -- # '[' -n 83953 ']' 00:29:04.158 21:42:25 -- nvmf/common.sh@478 -- # killprocess 83953 00:29:04.158 21:42:25 -- common/autotest_common.sh@926 -- # '[' -z 83953 ']' 00:29:04.158 21:42:25 -- common/autotest_common.sh@930 -- # kill -0 83953 00:29:04.158 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (83953) - No such process 00:29:04.158 Process with pid 83953 is not found 00:29:04.158 21:42:25 -- common/autotest_common.sh@953 -- # echo 'Process with pid 83953 is not found' 00:29:04.158 21:42:25 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:29:04.158 21:42:25 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:29:04.158 21:42:25 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:29:04.158 21:42:25 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:04.158 21:42:25 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:29:04.158 21:42:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:04.158 21:42:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:04.158 21:42:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:04.158 21:42:25 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:29:04.158 00:29:04.158 real 0m37.010s 00:29:04.158 user 1m10.125s 00:29:04.158 sys 0m9.775s 00:29:04.158 21:42:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:04.158 21:42:25 -- common/autotest_common.sh@10 -- # set +x 00:29:04.158 ************************************ 00:29:04.158 END TEST nvmf_digest 00:29:04.158 ************************************ 00:29:04.416 21:42:25 -- nvmf/nvmf.sh@110 -- # [[ 0 -eq 1 ]] 00:29:04.416 21:42:25 -- nvmf/nvmf.sh@115 -- # [[ 1 -eq 1 ]] 00:29:04.416 21:42:25 -- nvmf/nvmf.sh@116 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:29:04.416 21:42:25 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:29:04.416 21:42:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:04.416 21:42:25 -- common/autotest_common.sh@10 -- # set +x 00:29:04.416 ************************************ 00:29:04.416 START TEST nvmf_multipath 00:29:04.416 ************************************ 00:29:04.416 21:42:25 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:29:04.416 * Looking for test storage... 
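Note on the digest teardown above: the transient-error check is a plain iostat query against bdevperf's RPC socket. A minimal sketch of that query, with the socket path, bdev name, and jq filter taken verbatim from this run (rpc.py abbreviates /home/vagrant/spdk_repo/spdk/scripts/rpc.py):

  # count of COMMAND TRANSIENT TRANSPORT ERROR completions recorded for the nvme0n1 bdev
  rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

The test only asserts that this count is non-zero; in this run it read 401.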
00:29:04.416 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:29:04.416 21:42:25 -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:29:04.416 21:42:25 -- nvmf/common.sh@7 -- # uname -s 00:29:04.416 21:42:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:04.416 21:42:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:04.416 21:42:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:04.416 21:42:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:04.416 21:42:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:04.416 21:42:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:04.416 21:42:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:04.416 21:42:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:04.416 21:42:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:04.416 21:42:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:04.416 21:42:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:65f0dc09-2f81-4c7b-a413-2a2a000e2750 00:29:04.417 21:42:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=65f0dc09-2f81-4c7b-a413-2a2a000e2750 00:29:04.417 21:42:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:04.417 21:42:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:04.417 21:42:25 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:29:04.417 21:42:25 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:04.417 21:42:25 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:04.417 21:42:25 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:04.417 21:42:25 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:04.417 21:42:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:04.417 21:42:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:04.417 21:42:25 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:04.417 21:42:25 -- paths/export.sh@5 
-- # export PATH 00:29:04.417 21:42:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:04.417 21:42:25 -- nvmf/common.sh@46 -- # : 0 00:29:04.417 21:42:25 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:29:04.417 21:42:25 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:29:04.417 21:42:25 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:29:04.417 21:42:25 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:04.417 21:42:25 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:04.417 21:42:25 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:29:04.417 21:42:25 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:29:04.417 21:42:25 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:29:04.417 21:42:25 -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:04.417 21:42:25 -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:04.417 21:42:25 -- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:04.417 21:42:25 -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:29:04.417 21:42:25 -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:04.417 21:42:25 -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:29:04.417 21:42:25 -- host/multipath.sh@30 -- # nvmftestinit 00:29:04.417 21:42:25 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:29:04.417 21:42:25 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:04.417 21:42:25 -- nvmf/common.sh@436 -- # prepare_net_devs 00:29:04.417 21:42:25 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:29:04.417 21:42:25 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:29:04.417 21:42:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:04.417 21:42:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:04.417 21:42:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:04.417 21:42:25 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:29:04.417 21:42:25 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:29:04.417 21:42:25 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:29:04.417 21:42:25 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:29:04.417 21:42:25 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:29:04.417 21:42:25 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:29:04.417 21:42:25 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:04.417 21:42:25 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:04.417 21:42:25 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:29:04.417 21:42:25 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:29:04.417 21:42:25 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:29:04.417 21:42:25 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:29:04.417 21:42:25 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:29:04.417 21:42:25 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:04.417 21:42:25 -- nvmf/common.sh@148 -- # 
NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:29:04.417 21:42:25 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:29:04.417 21:42:25 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:29:04.417 21:42:25 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:29:04.417 21:42:25 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:29:04.417 21:42:25 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:29:04.417 Cannot find device "nvmf_tgt_br" 00:29:04.417 21:42:25 -- nvmf/common.sh@154 -- # true 00:29:04.417 21:42:25 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:29:04.417 Cannot find device "nvmf_tgt_br2" 00:29:04.417 21:42:25 -- nvmf/common.sh@155 -- # true 00:29:04.417 21:42:25 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:29:04.417 21:42:25 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:29:04.417 Cannot find device "nvmf_tgt_br" 00:29:04.417 21:42:25 -- nvmf/common.sh@157 -- # true 00:29:04.417 21:42:25 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:29:04.417 Cannot find device "nvmf_tgt_br2" 00:29:04.417 21:42:25 -- nvmf/common.sh@158 -- # true 00:29:04.417 21:42:25 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:29:04.417 21:42:25 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:29:04.417 21:42:25 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:29:04.417 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:04.417 21:42:25 -- nvmf/common.sh@161 -- # true 00:29:04.417 21:42:25 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:29:04.417 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:04.417 21:42:25 -- nvmf/common.sh@162 -- # true 00:29:04.417 21:42:25 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:29:04.417 21:42:25 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:29:04.675 21:42:25 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:29:04.675 21:42:25 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:29:04.675 21:42:25 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:29:04.675 21:42:25 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:29:04.675 21:42:25 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:29:04.675 21:42:25 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:29:04.675 21:42:25 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:29:04.675 21:42:25 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:29:04.675 21:42:25 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:29:04.675 21:42:25 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:29:04.675 21:42:25 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:29:04.675 21:42:25 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:29:04.675 21:42:25 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:29:04.675 21:42:25 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:29:04.675 21:42:25 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:29:04.675 21:42:25 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:29:04.676 21:42:25 -- 
nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:29:04.676 21:42:25 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:29:04.676 21:42:25 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:29:04.676 21:42:25 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:29:04.676 21:42:25 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:29:04.676 21:42:25 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:29:04.676 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:04.676 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.140 ms 00:29:04.676 00:29:04.676 --- 10.0.0.2 ping statistics --- 00:29:04.676 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:04.676 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:29:04.676 21:42:25 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:29:04.676 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:29:04.676 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:29:04.676 00:29:04.676 --- 10.0.0.3 ping statistics --- 00:29:04.676 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:04.676 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:29:04.676 21:42:25 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:29:04.676 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:04.676 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.050 ms 00:29:04.676 00:29:04.676 --- 10.0.0.1 ping statistics --- 00:29:04.676 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:04.676 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:29:04.676 21:42:25 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:04.676 21:42:25 -- nvmf/common.sh@421 -- # return 0 00:29:04.676 21:42:25 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:29:04.676 21:42:25 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:04.676 21:42:25 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:29:04.676 21:42:25 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:29:04.676 21:42:25 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:04.676 21:42:25 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:29:04.676 21:42:25 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:29:04.676 21:42:25 -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:29:04.676 21:42:25 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:29:04.676 21:42:25 -- common/autotest_common.sh@712 -- # xtrace_disable 00:29:04.676 21:42:25 -- common/autotest_common.sh@10 -- # set +x 00:29:04.676 21:42:25 -- nvmf/common.sh@469 -- # nvmfpid=84432 00:29:04.676 21:42:25 -- nvmf/common.sh@470 -- # waitforlisten 84432 00:29:04.676 21:42:25 -- common/autotest_common.sh@819 -- # '[' -z 84432 ']' 00:29:04.676 21:42:25 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:29:04.676 21:42:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:04.676 21:42:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:04.676 21:42:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:04.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
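The nvmf_veth_init sequence above builds the virtual network this multipath run uses: the initiator stays in the root namespace on 10.0.0.1, the target runs inside the nvmf_tgt_ns_spdk namespace with two interfaces (10.0.0.2 and 10.0.0.3), and everything is joined through the nvmf_br bridge. Condensed to its essential commands (interface names and addresses as in the log; link-up steps and the FORWARD iptables rule omitted), the topology is roughly:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target side
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # target side
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  # the target application itself then runs inside the namespace:
  ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3

The three pings above (to 10.0.0.2, 10.0.0.3, and back to 10.0.0.1 from inside the namespace) simply confirm that this topology is reachable before the target is started.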
00:29:04.676 21:42:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:04.676 21:42:25 -- common/autotest_common.sh@10 -- # set +x 00:29:04.934 [2024-07-11 21:42:25.635423] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:29:04.934 [2024-07-11 21:42:25.635544] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:04.934 [2024-07-11 21:42:25.772511] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:04.934 [2024-07-11 21:42:25.867081] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:04.934 [2024-07-11 21:42:25.867250] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:04.934 [2024-07-11 21:42:25.867263] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:04.934 [2024-07-11 21:42:25.867272] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:04.934 [2024-07-11 21:42:25.867434] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:04.934 [2024-07-11 21:42:25.867445] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:05.869 21:42:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:05.869 21:42:26 -- common/autotest_common.sh@852 -- # return 0 00:29:05.869 21:42:26 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:29:05.869 21:42:26 -- common/autotest_common.sh@718 -- # xtrace_disable 00:29:05.869 21:42:26 -- common/autotest_common.sh@10 -- # set +x 00:29:05.869 21:42:26 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:05.869 21:42:26 -- host/multipath.sh@33 -- # nvmfapp_pid=84432 00:29:05.869 21:42:26 -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:06.126 [2024-07-11 21:42:26.838336] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:06.126 21:42:26 -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:29:06.384 Malloc0 00:29:06.384 21:42:27 -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:29:06.642 21:42:27 -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:06.643 21:42:27 -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:06.900 [2024-07-11 21:42:27.805197] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:06.900 21:42:27 -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:07.158 [2024-07-11 21:42:28.081372] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:07.158 21:42:28 -- host/multipath.sh@44 -- # bdevperf_pid=84482 00:29:07.158 21:42:28 -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 
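At this point the target side of the multipath test is fully configured: one TCP transport, a 64 MB malloc bdev (512-byte blocks) exposed through nqn.2016-06.io.spdk:cnode1, and two listeners on the same address but different ports so each port can be driven as a separate path. Condensed from the rpc.py calls above (rpc.py abbreviates /home/vagrant/spdk_repo/spdk/scripts/rpc.py; the subsystem's -r switch is assumed here to be the ANA-reporting option of nvmf_create_subsystem):

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

The bdevperf instance started right after (-q 128 -o 4096 -w verify -t 90, on its own socket /var/tmp/bdevperf.sock) is the initiator that will drive I/O across those two listeners.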
00:29:07.158 21:42:28 -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:29:07.158 21:42:28 -- host/multipath.sh@47 -- # waitforlisten 84482 /var/tmp/bdevperf.sock 00:29:07.158 21:42:28 -- common/autotest_common.sh@819 -- # '[' -z 84482 ']' 00:29:07.158 21:42:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:07.158 21:42:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:07.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:07.158 21:42:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:07.158 21:42:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:07.158 21:42:28 -- common/autotest_common.sh@10 -- # set +x 00:29:08.578 21:42:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:08.578 21:42:29 -- common/autotest_common.sh@852 -- # return 0 00:29:08.578 21:42:29 -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:29:08.578 21:42:29 -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:29:08.851 Nvme0n1 00:29:08.851 21:42:29 -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:29:09.109 Nvme0n1 00:29:09.109 21:42:29 -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:29:09.109 21:42:29 -- host/multipath.sh@78 -- # sleep 1 00:29:10.045 21:42:30 -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:29:10.045 21:42:30 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:29:10.611 21:42:31 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:29:10.611 21:42:31 -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:29:10.611 21:42:31 -- host/multipath.sh@65 -- # dtrace_pid=84527 00:29:10.611 21:42:31 -- host/multipath.sh@66 -- # sleep 6 00:29:10.611 21:42:31 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 84432 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:29:17.215 21:42:37 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:29:17.215 21:42:37 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:29:17.215 21:42:37 -- host/multipath.sh@67 -- # active_port=4421 00:29:17.215 21:42:37 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:29:17.215 Attaching 4 probes... 
00:29:17.215 @path[10.0.0.2, 4421]: 18284 00:29:17.215 @path[10.0.0.2, 4421]: 18693 00:29:17.215 @path[10.0.0.2, 4421]: 18655 00:29:17.215 @path[10.0.0.2, 4421]: 18690 00:29:17.215 @path[10.0.0.2, 4421]: 18760 00:29:17.215 21:42:37 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:29:17.215 21:42:37 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:29:17.215 21:42:37 -- host/multipath.sh@69 -- # sed -n 1p 00:29:17.215 21:42:37 -- host/multipath.sh@69 -- # port=4421 00:29:17.215 21:42:37 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:29:17.215 21:42:37 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:29:17.215 21:42:37 -- host/multipath.sh@72 -- # kill 84527 00:29:17.215 21:42:37 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:29:17.215 21:42:37 -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:29:17.215 21:42:37 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:29:17.215 21:42:38 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:29:17.473 21:42:38 -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:29:17.473 21:42:38 -- host/multipath.sh@65 -- # dtrace_pid=84645 00:29:17.473 21:42:38 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 84432 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:29:17.473 21:42:38 -- host/multipath.sh@66 -- # sleep 6 00:29:24.028 21:42:44 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:29:24.028 21:42:44 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:29:24.028 21:42:44 -- host/multipath.sh@67 -- # active_port=4420 00:29:24.028 21:42:44 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:29:24.028 Attaching 4 probes... 
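Each confirm_io_on_port pass above follows the same recipe: flip the ANA state of the two listeners, let bdevperf run for six seconds while scripts/bpf/nvmf_path.bt (attached via bpftrace.sh to the nvmf_tgt pid, 84432 here) counts I/O per path into trace.txt, then check that the port the target reports in the expected ANA state is the port the probes actually saw traffic on. A rough sketch of that comparison, with the jq/awk filters copied from the log:

  # port of the listener currently reported in the expected ANA state ("optimized" in this pass)
  active_port=$(rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 \
    | jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid')
  # port the bpftrace probes attributed I/O to, taken from "@path[10.0.0.2, <port>]: <count>" lines
  port=$(cut -d ']' -f1 trace.txt | awk '$1=="@path[10.0.0.2," {print $2}' | sed -n 1p)
  [[ $port == $active_port ]]

The @path counts themselves (roughly 18k I/Os per sample above) are only used to prove that traffic flowed on the expected port, not as a performance measurement.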
00:29:24.028 @path[10.0.0.2, 4420]: 18695 00:29:24.028 @path[10.0.0.2, 4420]: 18951 00:29:24.028 @path[10.0.0.2, 4420]: 18833 00:29:24.028 @path[10.0.0.2, 4420]: 18899 00:29:24.028 @path[10.0.0.2, 4420]: 18765 00:29:24.028 21:42:44 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:29:24.028 21:42:44 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:29:24.028 21:42:44 -- host/multipath.sh@69 -- # sed -n 1p 00:29:24.028 21:42:44 -- host/multipath.sh@69 -- # port=4420 00:29:24.028 21:42:44 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:29:24.028 21:42:44 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:29:24.028 21:42:44 -- host/multipath.sh@72 -- # kill 84645 00:29:24.028 21:42:44 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:29:24.028 21:42:44 -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:29:24.028 21:42:44 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:29:24.028 21:42:44 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:29:24.286 21:42:45 -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:29:24.286 21:42:45 -- host/multipath.sh@65 -- # dtrace_pid=84762 00:29:24.286 21:42:45 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 84432 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:29:24.286 21:42:45 -- host/multipath.sh@66 -- # sleep 6 00:29:30.868 21:42:51 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:29:30.868 21:42:51 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:29:30.868 21:42:51 -- host/multipath.sh@67 -- # active_port=4421 00:29:30.868 21:42:51 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:29:30.868 Attaching 4 probes... 
00:29:30.868 @path[10.0.0.2, 4421]: 12305 00:29:30.868 @path[10.0.0.2, 4421]: 16460 00:29:30.868 @path[10.0.0.2, 4421]: 16120 00:29:30.868 @path[10.0.0.2, 4421]: 16810 00:29:30.868 @path[10.0.0.2, 4421]: 18481 00:29:30.868 21:42:51 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:29:30.868 21:42:51 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:29:30.868 21:42:51 -- host/multipath.sh@69 -- # sed -n 1p 00:29:30.868 21:42:51 -- host/multipath.sh@69 -- # port=4421 00:29:30.868 21:42:51 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:29:30.868 21:42:51 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:29:30.868 21:42:51 -- host/multipath.sh@72 -- # kill 84762 00:29:30.868 21:42:51 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:29:30.868 21:42:51 -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:29:30.868 21:42:51 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:29:30.868 21:42:51 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:29:31.125 21:42:51 -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:29:31.125 21:42:51 -- host/multipath.sh@65 -- # dtrace_pid=84870 00:29:31.125 21:42:51 -- host/multipath.sh@66 -- # sleep 6 00:29:31.125 21:42:51 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 84432 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:29:37.683 21:42:57 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:29:37.683 21:42:57 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:29:37.683 21:42:58 -- host/multipath.sh@67 -- # active_port= 00:29:37.683 21:42:58 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:29:37.683 Attaching 4 probes... 
00:29:37.683 00:29:37.683 00:29:37.683 00:29:37.683 00:29:37.683 00:29:37.683 21:42:58 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:29:37.683 21:42:58 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:29:37.683 21:42:58 -- host/multipath.sh@69 -- # sed -n 1p 00:29:37.683 21:42:58 -- host/multipath.sh@69 -- # port= 00:29:37.683 21:42:58 -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:29:37.683 21:42:58 -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:29:37.683 21:42:58 -- host/multipath.sh@72 -- # kill 84870 00:29:37.683 21:42:58 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:29:37.683 21:42:58 -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:29:37.683 21:42:58 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:29:37.683 21:42:58 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:29:37.941 21:42:58 -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:29:37.941 21:42:58 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 84432 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:29:37.941 21:42:58 -- host/multipath.sh@65 -- # dtrace_pid=84992 00:29:37.941 21:42:58 -- host/multipath.sh@66 -- # sleep 6 00:29:44.521 21:43:04 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:29:44.521 21:43:04 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:29:44.521 21:43:05 -- host/multipath.sh@67 -- # active_port=4421 00:29:44.521 21:43:05 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:29:44.521 Attaching 4 probes... 
00:29:44.521 @path[10.0.0.2, 4421]: 17879 00:29:44.521 @path[10.0.0.2, 4421]: 18098 00:29:44.521 @path[10.0.0.2, 4421]: 18179 00:29:44.521 @path[10.0.0.2, 4421]: 18155 00:29:44.521 @path[10.0.0.2, 4421]: 18292 00:29:44.521 21:43:05 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:29:44.521 21:43:05 -- host/multipath.sh@69 -- # sed -n 1p 00:29:44.521 21:43:05 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:29:44.521 21:43:05 -- host/multipath.sh@69 -- # port=4421 00:29:44.521 21:43:05 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:29:44.521 21:43:05 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:29:44.521 21:43:05 -- host/multipath.sh@72 -- # kill 84992 00:29:44.521 21:43:05 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:29:44.521 21:43:05 -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:44.521 [2024-07-11 21:43:05.463973] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aaaee0 is same with the state(5) to be set 00:29:44.521 [2024-07-11 21:43:05.464045] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aaaee0 is same with the state(5) to be set 00:29:44.521 [2024-07-11 21:43:05.464058] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aaaee0 is same with the state(5) to be set 00:29:44.521 [2024-07-11 21:43:05.464066] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aaaee0 is same with the state(5) to be set 00:29:44.521 [2024-07-11 21:43:05.464075] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aaaee0 is same with the state(5) to be set 00:29:44.521 [2024-07-11 21:43:05.464085] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aaaee0 is same with the state(5) to be set 00:29:44.521 [2024-07-11 21:43:05.464094] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aaaee0 is same with the state(5) to be set 00:29:44.521 [2024-07-11 21:43:05.464103] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aaaee0 is same with the state(5) to be set 00:29:44.521 [2024-07-11 21:43:05.464111] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aaaee0 is same with the state(5) to be set 00:29:44.521 [2024-07-11 21:43:05.464119] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aaaee0 is same with the state(5) to be set 00:29:44.521 [2024-07-11 21:43:05.464128] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aaaee0 is same with the state(5) to be set 00:29:44.521 [2024-07-11 21:43:05.464136] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aaaee0 is same with the state(5) to be set 00:29:44.521 [2024-07-11 21:43:05.464144] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aaaee0 is same with the state(5) to be set 00:29:44.521 [2024-07-11 21:43:05.464153] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aaaee0 is same with the state(5) to be set 00:29:44.521 [2024-07-11 21:43:05.464162] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aaaee0 is same with the state(5) to be set 00:29:44.521 [2024-07-11 21:43:05.464170] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x1aaaee0 is same with the state(5) to be set [the identical tcp.c:1574:nvmf_tcp_qpair_set_recv_state *ERROR* message repeats for timestamps 2024-07-11 21:43:05.464178 through 21:43:05.464348] 00:29:44.522 [2024-07-11 21:43:05.464357]
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aaaee0 is same with the state(5) to be set 00:29:44.522 [2024-07-11 21:43:05.464365] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aaaee0 is same with the state(5) to be set 00:29:44.522 [2024-07-11 21:43:05.464374] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aaaee0 is same with the state(5) to be set 00:29:44.522 [2024-07-11 21:43:05.464382] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aaaee0 is same with the state(5) to be set 00:29:44.522 [2024-07-11 21:43:05.464391] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aaaee0 is same with the state(5) to be set 00:29:44.522 [2024-07-11 21:43:05.464399] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aaaee0 is same with the state(5) to be set 00:29:44.522 [2024-07-11 21:43:05.464409] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aaaee0 is same with the state(5) to be set 00:29:44.522 [2024-07-11 21:43:05.464418] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aaaee0 is same with the state(5) to be set 00:29:44.522 [2024-07-11 21:43:05.464427] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aaaee0 is same with the state(5) to be set 00:29:44.522 [2024-07-11 21:43:05.464444] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aaaee0 is same with the state(5) to be set 00:29:44.522 [2024-07-11 21:43:05.464452] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aaaee0 is same with the state(5) to be set 00:29:44.522 [2024-07-11 21:43:05.464461] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aaaee0 is same with the state(5) to be set 00:29:44.522 [2024-07-11 21:43:05.464469] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aaaee0 is same with the state(5) to be set 00:29:44.779 21:43:05 -- host/multipath.sh@101 -- # sleep 1 00:29:45.714 21:43:06 -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:29:45.714 21:43:06 -- host/multipath.sh@65 -- # dtrace_pid=85120 00:29:45.714 21:43:06 -- host/multipath.sh@66 -- # sleep 6 00:29:45.714 21:43:06 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 84432 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:29:52.266 21:43:12 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:29:52.266 21:43:12 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:29:52.266 21:43:12 -- host/multipath.sh@67 -- # active_port=4420 00:29:52.266 21:43:12 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:29:52.266 Attaching 4 probes... 
00:29:52.266 @path[10.0.0.2, 4420]: 17876 00:29:52.266 @path[10.0.0.2, 4420]: 18127 00:29:52.266 @path[10.0.0.2, 4420]: 18220 00:29:52.266 @path[10.0.0.2, 4420]: 18144 00:29:52.266 @path[10.0.0.2, 4420]: 18170 00:29:52.266 21:43:12 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:29:52.266 21:43:12 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:29:52.266 21:43:12 -- host/multipath.sh@69 -- # sed -n 1p 00:29:52.266 21:43:12 -- host/multipath.sh@69 -- # port=4420 00:29:52.266 21:43:12 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:29:52.266 21:43:12 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:29:52.266 21:43:12 -- host/multipath.sh@72 -- # kill 85120 00:29:52.266 21:43:12 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:29:52.266 21:43:12 -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:52.266 [2024-07-11 21:43:13.072642] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:52.266 21:43:13 -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:29:52.534 21:43:13 -- host/multipath.sh@111 -- # sleep 6 00:29:59.116 21:43:19 -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:29:59.117 21:43:19 -- host/multipath.sh@65 -- # dtrace_pid=85290 00:29:59.117 21:43:19 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 84432 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:29:59.117 21:43:19 -- host/multipath.sh@66 -- # sleep 6 00:30:05.676 21:43:25 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:30:05.676 21:43:25 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:30:05.676 21:43:25 -- host/multipath.sh@67 -- # active_port=4421 00:30:05.676 21:43:25 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:30:05.676 Attaching 4 probes... 
00:30:05.676 @path[10.0.0.2, 4421]: 18019 00:30:05.676 @path[10.0.0.2, 4421]: 18222 00:30:05.676 @path[10.0.0.2, 4421]: 18064 00:30:05.676 @path[10.0.0.2, 4421]: 18176 00:30:05.676 @path[10.0.0.2, 4421]: 18195 00:30:05.676 21:43:25 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:30:05.676 21:43:25 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:30:05.676 21:43:25 -- host/multipath.sh@69 -- # sed -n 1p 00:30:05.676 21:43:25 -- host/multipath.sh@69 -- # port=4421 00:30:05.676 21:43:25 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:30:05.676 21:43:25 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:30:05.676 21:43:25 -- host/multipath.sh@72 -- # kill 85290 00:30:05.676 21:43:25 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:30:05.676 21:43:25 -- host/multipath.sh@114 -- # killprocess 84482 00:30:05.676 21:43:25 -- common/autotest_common.sh@926 -- # '[' -z 84482 ']' 00:30:05.676 21:43:25 -- common/autotest_common.sh@930 -- # kill -0 84482 00:30:05.676 21:43:25 -- common/autotest_common.sh@931 -- # uname 00:30:05.676 21:43:25 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:05.676 21:43:25 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 84482 00:30:05.676 21:43:25 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:30:05.676 21:43:25 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:30:05.676 killing process with pid 84482 00:30:05.676 21:43:25 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 84482' 00:30:05.676 21:43:25 -- common/autotest_common.sh@945 -- # kill 84482 00:30:05.676 21:43:25 -- common/autotest_common.sh@950 -- # wait 84482 00:30:05.676 Connection closed with partial response: 00:30:05.676 00:30:05.676 00:30:05.676 21:43:25 -- host/multipath.sh@116 -- # wait 84482 00:30:05.676 21:43:25 -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:30:05.676 [2024-07-11 21:42:28.159801] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:30:05.676 [2024-07-11 21:42:28.159977] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84482 ] 00:30:05.676 [2024-07-11 21:42:28.301329] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:05.676 [2024-07-11 21:42:28.408339] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:05.676 Running I/O for 90 seconds... 
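The per-I/O dump that follows was captured while host/multipath.sh cycled the ANA states of the two listeners, so many completions below carry ASYMMETRIC ACCESS INACCESSIBLE status on the path whose listener had just been made inaccessible, while the confirm_io_on_port checks above show I/O continuing on the remaining port. For orientation, a minimal sketch of the two helpers the xtrace exercises is given here; it is reconstructed from the trace rather than copied from multipath.sh, and $rootdir, $nvmfpid and the local trace.txt location are stand-ins:

# Minimal sketch of the helpers seen in the xtrace above (reconstruction, not the verbatim script).
set_ANA_state() { # $1 = ANA state for the 4420 listener, $2 = ANA state for the 4421 listener
    "$rootdir/scripts/rpc.py" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n "$1"
    "$rootdir/scripts/rpc.py" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n "$2"
}
confirm_io_on_port() { # $1 = expected ANA state, $2 = port that should be carrying I/O
    # trace which path actually receives I/O while bdevperf keeps running
    "$rootdir/scripts/bpftrace.sh" "$nvmfpid" "$rootdir/scripts/bpf/nvmf_path.bt" > trace.txt &
    dtrace_pid=$!
    sleep 6
    # port whose listener currently reports the expected ANA state
    active_port=$("$rootdir/scripts/rpc.py" nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 |
        jq -r ".[] | select (.ana_states[0].ana_state==\"$1\") | .address.trsvcid")
    # first port seen in the bpftrace histogram, e.g. "@path[10.0.0.2, 4421]: 18284"
    port=$(awk '$1=="@path[10.0.0.2," {print $2}' trace.txt | cut -d ']' -f1 | sed -n 1p)
    kill "$dtrace_pid"
    rm -f trace.txt
    [[ $active_port == "$2" && $port == "$2" ]]
}

Each dtrace_pid cycle in the trace above (84527, 84645, 84762, 84870, 84992, 85120, 85290) is one run of this check with a different expected state/port pair; the empty @path output around 21:42:58 corresponds to the step where both listeners were set inaccessible, so no I/O reached either port.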
00:30:05.676 [2024-07-11 21:42:38.319430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:46960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.676 [2024-07-11 21:42:38.319536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:30:05.676 [2024-07-11 21:42:38.319597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:46968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.676 [2024-07-11 21:42:38.319618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:30:05.676 [2024-07-11 21:42:38.319649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:46976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.676 [2024-07-11 21:42:38.319664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:30:05.676 [2024-07-11 21:42:38.319686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:46984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.676 [2024-07-11 21:42:38.319701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:30:05.676 [2024-07-11 21:42:38.319723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:46280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.676 [2024-07-11 21:42:38.319738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:30:05.676 [2024-07-11 21:42:38.319760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:46328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.676 [2024-07-11 21:42:38.319774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:30:05.676 [2024-07-11 21:42:38.319796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:46336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.676 [2024-07-11 21:42:38.319810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:05.676 [2024-07-11 21:42:38.319832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:46344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.676 [2024-07-11 21:42:38.319847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:30:05.676 [2024-07-11 21:42:38.319869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:46360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.676 [2024-07-11 21:42:38.319883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:30:05.676 [2024-07-11 21:42:38.319904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:46368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.676 [2024-07-11 21:42:38.319919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:53 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:30:05.676 [2024-07-11 21:42:38.319940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:46384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.676 [2024-07-11 21:42:38.319970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:30:05.676 [2024-07-11 21:42:38.319994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:46432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.676 [2024-07-11 21:42:38.320010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:30:05.676 [2024-07-11 21:42:38.320031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:46992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.676 [2024-07-11 21:42:38.320045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:30:05.676 [2024-07-11 21:42:38.320066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:47000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.676 [2024-07-11 21:42:38.320080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:30:05.676 [2024-07-11 21:42:38.320101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:47008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.676 [2024-07-11 21:42:38.320117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:30:05.676 [2024-07-11 21:42:38.320138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:47016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.676 [2024-07-11 21:42:38.320153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:30:05.676 [2024-07-11 21:42:38.320174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:47024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.676 [2024-07-11 21:42:38.320189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:30:05.676 [2024-07-11 21:42:38.320210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:47032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.676 [2024-07-11 21:42:38.320224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:30:05.676 [2024-07-11 21:42:38.320245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:47040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.676 [2024-07-11 21:42:38.320261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:30:05.676 [2024-07-11 21:42:38.320283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:47048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.676 [2024-07-11 21:42:38.320298] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:30:05.676 [2024-07-11 21:42:38.320319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:47056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.676 [2024-07-11 21:42:38.320334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:30:05.676 [2024-07-11 21:42:38.320354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:47064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.677 [2024-07-11 21:42:38.320369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:30:05.677 [2024-07-11 21:42:38.320391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:47072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.677 [2024-07-11 21:42:38.320405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:05.677 [2024-07-11 21:42:38.320438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:47080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.677 [2024-07-11 21:42:38.320454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:30:05.677 [2024-07-11 21:42:38.320475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:47088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.677 [2024-07-11 21:42:38.320505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:30:05.677 [2024-07-11 21:42:38.320528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:47096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.677 [2024-07-11 21:42:38.320544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:30:05.677 [2024-07-11 21:42:38.320565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:47104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.677 [2024-07-11 21:42:38.320580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:30:05.677 [2024-07-11 21:42:38.320601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:47112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.677 [2024-07-11 21:42:38.320617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:30:05.677 [2024-07-11 21:42:38.320639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:47120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.677 [2024-07-11 21:42:38.320654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:05.677 [2024-07-11 21:42:38.320675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:47128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:30:05.677 [2024-07-11 21:42:38.320690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:05.677 [2024-07-11 21:42:38.320711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:46448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.677 [2024-07-11 21:42:38.320726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:30:05.677 [2024-07-11 21:42:38.320748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:46456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.677 [2024-07-11 21:42:38.320762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:30:05.677 [2024-07-11 21:42:38.320783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:46464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.677 [2024-07-11 21:42:38.320798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:30:05.677 [2024-07-11 21:42:38.320819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:46480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.677 [2024-07-11 21:42:38.320834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:30:05.677 [2024-07-11 21:42:38.320855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:46488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.677 [2024-07-11 21:42:38.320870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:30:05.677 [2024-07-11 21:42:38.320904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:46512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.677 [2024-07-11 21:42:38.320921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:30:05.677 [2024-07-11 21:42:38.320942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:46536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.677 [2024-07-11 21:42:38.320957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:30:05.677 [2024-07-11 21:42:38.320978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:46544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.677 [2024-07-11 21:42:38.320993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:30:05.677 [2024-07-11 21:42:38.321016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:47136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.677 [2024-07-11 21:42:38.321031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:05.677 [2024-07-11 21:42:38.321053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 
nsid:1 lba:47144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.677 [2024-07-11 21:42:38.321068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:30:05.677 [2024-07-11 21:42:38.321111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:47152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.677 [2024-07-11 21:42:38.321132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:30:05.677 [2024-07-11 21:42:38.321155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:47160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.677 [2024-07-11 21:42:38.321170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:30:05.677 [2024-07-11 21:42:38.321191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:47168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.677 [2024-07-11 21:42:38.321207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:30:05.677 [2024-07-11 21:42:38.321228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:47176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.677 [2024-07-11 21:42:38.321244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:30:05.677 [2024-07-11 21:42:38.321265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:47184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.677 [2024-07-11 21:42:38.321281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:30:05.677 [2024-07-11 21:42:38.321302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:47192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.677 [2024-07-11 21:42:38.321317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:30:05.677 [2024-07-11 21:42:38.321339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:47200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.677 [2024-07-11 21:42:38.321353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:30:05.677 [2024-07-11 21:42:38.321375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:47208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.677 [2024-07-11 21:42:38.321399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:30:05.677 [2024-07-11 21:42:38.321422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:47216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.677 [2024-07-11 21:42:38.321437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:05.677 [2024-07-11 21:42:38.321458] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:47224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.677 [2024-07-11 21:42:38.321473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:30:05.677 [2024-07-11 21:42:38.321510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:47232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.677 [2024-07-11 21:42:38.321528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:30:05.677 [2024-07-11 21:42:38.321549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:47240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.677 [2024-07-11 21:42:38.321564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:30:05.677 [2024-07-11 21:42:38.321585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:47248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.677 [2024-07-11 21:42:38.321600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:30:05.677 [2024-07-11 21:42:38.321622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:47256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.677 [2024-07-11 21:42:38.321638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:30:05.677 [2024-07-11 21:42:38.321659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:47264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.677 [2024-07-11 21:42:38.321674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:05.677 [2024-07-11 21:42:38.321695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:47272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.677 [2024-07-11 21:42:38.321710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:30:05.677 [2024-07-11 21:42:38.321731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:47280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.677 [2024-07-11 21:42:38.321746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:30:05.677 [2024-07-11 21:42:38.321767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:46552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.677 [2024-07-11 21:42:38.321782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:30:05.677 [2024-07-11 21:42:38.321803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:46568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.677 [2024-07-11 21:42:38.321818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:005f p:0 m:0 
dnr:0 00:30:05.677 [2024-07-11 21:42:38.321839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:46576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.677 [2024-07-11 21:42:38.321862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:05.677 [2024-07-11 21:42:38.321885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:46584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.677 [2024-07-11 21:42:38.321900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:05.677 [2024-07-11 21:42:38.321922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:46592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.677 [2024-07-11 21:42:38.321937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:05.677 [2024-07-11 21:42:38.321958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:46608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.677 [2024-07-11 21:42:38.321973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:30:05.678 [2024-07-11 21:42:38.321994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:46624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.678 [2024-07-11 21:42:38.322009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:30:05.678 [2024-07-11 21:42:38.322030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:46648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.678 [2024-07-11 21:42:38.322045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:05.678 [2024-07-11 21:42:38.322066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:47288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.678 [2024-07-11 21:42:38.322081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:30:05.678 [2024-07-11 21:42:38.322103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:47296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.678 [2024-07-11 21:42:38.322117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:30:05.678 [2024-07-11 21:42:38.322139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:47304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.678 [2024-07-11 21:42:38.322153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:30:05.678 [2024-07-11 21:42:38.322174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:47312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.678 [2024-07-11 21:42:38.322189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:30:05.678 [2024-07-11 21:42:38.322211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:47320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.678 [2024-07-11 21:42:38.322226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:30:05.678 [2024-07-11 21:42:38.322247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:47328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.678 [2024-07-11 21:42:38.322261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:05.678 [2024-07-11 21:42:38.322282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:47336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.678 [2024-07-11 21:42:38.322297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:05.678 [2024-07-11 21:42:38.322326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:47344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.678 [2024-07-11 21:42:38.322341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:30:05.678 [2024-07-11 21:42:38.322362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:47352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.678 [2024-07-11 21:42:38.322377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:30:05.678 [2024-07-11 21:42:38.322398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:47360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.678 [2024-07-11 21:42:38.322413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:30:05.678 [2024-07-11 21:42:38.322434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:47368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.678 [2024-07-11 21:42:38.322450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:30:05.678 [2024-07-11 21:42:38.322471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:46672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.678 [2024-07-11 21:42:38.322500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:30:05.678 [2024-07-11 21:42:38.322535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:46688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.678 [2024-07-11 21:42:38.322553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:30:05.678 [2024-07-11 21:42:38.322575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:46720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.678 [2024-07-11 21:42:38.322589] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:30:05.678 [2024-07-11 21:42:38.322611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:46736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.678 [2024-07-11 21:42:38.322625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:05.678 [2024-07-11 21:42:38.322646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:46744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.678 [2024-07-11 21:42:38.322661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:30:05.678 [2024-07-11 21:42:38.322682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:46752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.678 [2024-07-11 21:42:38.322697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:30:05.678 [2024-07-11 21:42:38.322719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:46776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.678 [2024-07-11 21:42:38.322734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:30:05.678 [2024-07-11 21:42:38.322756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:46800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.678 [2024-07-11 21:42:38.322771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:30:05.678 [2024-07-11 21:42:38.322800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:47376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.678 [2024-07-11 21:42:38.322816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:05.678 [2024-07-11 21:42:38.322838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:47384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.678 [2024-07-11 21:42:38.322853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:30:05.678 [2024-07-11 21:42:38.322875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:47392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.678 [2024-07-11 21:42:38.322889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:05.678 [2024-07-11 21:42:38.322911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:47400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.678 [2024-07-11 21:42:38.322926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:30:05.678 [2024-07-11 21:42:38.322947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:47408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:30:05.678 [2024-07-11 21:42:38.322962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:30:05.678 [2024-07-11 21:42:38.322983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:47416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.678 [2024-07-11 21:42:38.323008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:05.678 [2024-07-11 21:42:38.323029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:47424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.678 [2024-07-11 21:42:38.323044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:05.678 [2024-07-11 21:42:38.323065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:47432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.678 [2024-07-11 21:42:38.323089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.678 [2024-07-11 21:42:38.323111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:47440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.678 [2024-07-11 21:42:38.323126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:05.678 [2024-07-11 21:42:38.323147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:47448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.678 [2024-07-11 21:42:38.323163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:05.678 [2024-07-11 21:42:38.323188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:47456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.678 [2024-07-11 21:42:38.323204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:30:05.678 [2024-07-11 21:42:38.323226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:47464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.678 [2024-07-11 21:42:38.323241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:30:05.678 [2024-07-11 21:42:38.323262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:47472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.678 [2024-07-11 21:42:38.323284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:30:05.678 [2024-07-11 21:42:38.323306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:47480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.678 [2024-07-11 21:42:38.323322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:30:05.678 [2024-07-11 21:42:38.323344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 
lba:47488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.678 [2024-07-11 21:42:38.323358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:30:05.678 [2024-07-11 21:42:38.323380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:47496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.678 [2024-07-11 21:42:38.323395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:30:05.678 [2024-07-11 21:42:38.323416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:47504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.678 [2024-07-11 21:42:38.323431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:30:05.678 [2024-07-11 21:42:38.323453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:47512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.678 [2024-07-11 21:42:38.323467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:30:05.678 [2024-07-11 21:42:38.323502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:47520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.678 [2024-07-11 21:42:38.323520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:05.678 [2024-07-11 21:42:38.323542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:46824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.678 [2024-07-11 21:42:38.323558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:30:05.678 [2024-07-11 21:42:38.323579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:46856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.679 [2024-07-11 21:42:38.323602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:30:05.679 [2024-07-11 21:42:38.323625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:46864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.679 [2024-07-11 21:42:38.323640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:30:05.679 [2024-07-11 21:42:38.323661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:46880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.679 [2024-07-11 21:42:38.323676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:30:05.679 [2024-07-11 21:42:38.323697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:46896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.679 [2024-07-11 21:42:38.323717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:30:05.679 [2024-07-11 21:42:38.323739] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:46904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.679 [2024-07-11 21:42:38.323762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:30:05.679 [2024-07-11 21:42:38.323784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:46912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.679 [2024-07-11 21:42:38.323799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:30:05.679 [2024-07-11 21:42:38.325414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:46952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.679 [2024-07-11 21:42:38.325444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:30:05.679 [2024-07-11 21:42:38.325473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:47528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.679 [2024-07-11 21:42:38.325505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:30:05.679 [2024-07-11 21:42:38.325529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.679 [2024-07-11 21:42:38.325545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:30:05.679 [2024-07-11 21:42:38.325568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:47544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.679 [2024-07-11 21:42:38.325583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:30:05.679 [2024-07-11 21:42:38.325604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:47552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.679 [2024-07-11 21:42:38.325618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:30:05.679 [2024-07-11 21:42:38.325639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:47560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.679 [2024-07-11 21:42:38.325654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:30:05.679 [2024-07-11 21:42:38.325676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:47568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.679 [2024-07-11 21:42:38.325691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:30:05.679 [2024-07-11 21:42:38.325712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:47576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.679 [2024-07-11 21:42:38.325726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:001a p:0 m:0 dnr:0 
00:30:05.679 [2024-07-11 21:42:38.325747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:47584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.679 [2024-07-11 21:42:38.325762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:05.679 [2024-07-11 21:42:38.325783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:47592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.679 [2024-07-11 21:42:38.325798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:30:05.679 [2024-07-11 21:42:38.325819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:47600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.679 [2024-07-11 21:42:38.325840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:30:05.679 [2024-07-11 21:42:38.325874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:47608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.679 [2024-07-11 21:42:38.325891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:30:05.679 [2024-07-11 21:42:38.325929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:47616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.679 [2024-07-11 21:42:38.325949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:30:05.679 [2024-07-11 21:42:38.325972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:47624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.679 [2024-07-11 21:42:38.325988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:30:05.679 [2024-07-11 21:42:38.326009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:47632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.679 [2024-07-11 21:42:38.326024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:05.679 [2024-07-11 21:42:38.326044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:47640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.679 [2024-07-11 21:42:38.326059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:05.679 [2024-07-11 21:42:38.326080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:47648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.679 [2024-07-11 21:42:38.326095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:30:05.679 [2024-07-11 21:42:38.326116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:47656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.679 [2024-07-11 21:42:38.326131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:30:05.679 [2024-07-11 21:42:44.834721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:46584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.679 [2024-07-11 21:42:44.834806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:30:05.679 [2024-07-11 21:42:44.834866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:46592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.679 [2024-07-11 21:42:44.834889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:30:05.679 [2024-07-11 21:42:44.834913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:46600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.679 [2024-07-11 21:42:44.834928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:30:05.679 [2024-07-11 21:42:44.834949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:46608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.679 [2024-07-11 21:42:44.834964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:30:05.679 [2024-07-11 21:42:44.834986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:46616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.679 [2024-07-11 21:42:44.835000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:30:05.679 [2024-07-11 21:42:44.835052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:46624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.679 [2024-07-11 21:42:44.835068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:30:05.679 [2024-07-11 21:42:44.835089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:46632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.679 [2024-07-11 21:42:44.835103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:30:05.679 [2024-07-11 21:42:44.835124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:46640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.679 [2024-07-11 21:42:44.835138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:05.679 [2024-07-11 21:42:44.835159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:46648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.679 [2024-07-11 21:42:44.835173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:30:05.679 [2024-07-11 21:42:44.835194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:46656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.679 [2024-07-11 21:42:44.835211] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:30:05.679 [2024-07-11 21:42:44.835232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:46664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.679 [2024-07-11 21:42:44.835246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:30:05.679 [2024-07-11 21:42:44.835267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:46672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.679 [2024-07-11 21:42:44.835281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:30:05.679 [2024-07-11 21:42:44.835302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:45976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.679 [2024-07-11 21:42:44.835316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:30:05.679 [2024-07-11 21:42:44.835337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:45992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.679 [2024-07-11 21:42:44.835351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:05.679 [2024-07-11 21:42:44.835371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:46008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.679 [2024-07-11 21:42:44.835385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:05.679 [2024-07-11 21:42:44.835406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:46024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.679 [2024-07-11 21:42:44.835420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:30:05.679 [2024-07-11 21:42:44.835440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:46056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.679 [2024-07-11 21:42:44.835454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:30:05.679 [2024-07-11 21:42:44.835498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:46064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.679 [2024-07-11 21:42:44.835594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:30:05.680 [2024-07-11 21:42:44.835620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:46072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.680 [2024-07-11 21:42:44.835636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:30:05.680 [2024-07-11 21:42:44.835658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:46080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:30:05.680 [2024-07-11 21:42:44.835672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:30:05.680 [2024-07-11 21:42:44.835698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:46680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.680 [2024-07-11 21:42:44.835713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:30:05.680 [2024-07-11 21:42:44.835734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:46688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.680 [2024-07-11 21:42:44.835749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:30:05.680 [2024-07-11 21:42:44.835770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:46696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.680 [2024-07-11 21:42:44.835785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:30:05.680 [2024-07-11 21:42:44.835806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:46704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.680 [2024-07-11 21:42:44.835820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:05.680 [2024-07-11 21:42:44.835842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:46712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.680 [2024-07-11 21:42:44.835856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:30:05.680 [2024-07-11 21:42:44.835897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:46720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.680 [2024-07-11 21:42:44.835916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:30:05.680 [2024-07-11 21:42:44.836211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:46728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.680 [2024-07-11 21:42:44.836232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:30:05.680 [2024-07-11 21:42:44.836256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:46736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.680 [2024-07-11 21:42:44.836270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:30:05.680 [2024-07-11 21:42:44.836292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:46744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.680 [2024-07-11 21:42:44.836307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:30:05.680 [2024-07-11 21:42:44.836329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 
nsid:1 lba:46752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.680 [2024-07-11 21:42:44.836354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:30:05.680 [2024-07-11 21:42:44.836378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:46760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.680 [2024-07-11 21:42:44.836393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:30:05.680 [2024-07-11 21:42:44.836415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:46768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.680 [2024-07-11 21:42:44.836430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:30:05.680 [2024-07-11 21:42:44.836452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:46776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.680 [2024-07-11 21:42:44.836468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:30:05.680 [2024-07-11 21:42:44.836504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:46784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.680 [2024-07-11 21:42:44.836522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:05.680 [2024-07-11 21:42:44.836544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:46792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.680 [2024-07-11 21:42:44.836559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:30:05.680 [2024-07-11 21:42:44.836582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:46800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.680 [2024-07-11 21:42:44.836596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:30:05.680 [2024-07-11 21:42:44.836618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:46808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.680 [2024-07-11 21:42:44.836632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:30:05.680 [2024-07-11 21:42:44.836654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:46112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.680 [2024-07-11 21:42:44.836668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:30:05.680 [2024-07-11 21:42:44.836690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:46144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.680 [2024-07-11 21:42:44.836705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:30:05.680 [2024-07-11 21:42:44.836727] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:46200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.680 [2024-07-11 21:42:44.836741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:05.680 [2024-07-11 21:42:44.836763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:46208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.680 [2024-07-11 21:42:44.836778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:30:05.680 [2024-07-11 21:42:44.836799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:46216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.680 [2024-07-11 21:42:44.836823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:30:05.680 [2024-07-11 21:42:44.836847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:46224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.680 [2024-07-11 21:42:44.836862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:30:05.680 [2024-07-11 21:42:44.836884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:46264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.680 [2024-07-11 21:42:44.836899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:30:05.680 [2024-07-11 21:42:44.836921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:46296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.680 [2024-07-11 21:42:44.836935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:05.680 [2024-07-11 21:42:44.836957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:46816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.680 [2024-07-11 21:42:44.836972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:05.680 [2024-07-11 21:42:44.836994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:46824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.680 [2024-07-11 21:42:44.837009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:05.680 [2024-07-11 21:42:44.837030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:46832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.680 [2024-07-11 21:42:44.837045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:30:05.680 [2024-07-11 21:42:44.837067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:46840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.680 [2024-07-11 21:42:44.837082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0064 p:0 m:0 
dnr:0 00:30:05.680 [2024-07-11 21:42:44.837103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:46848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.680 [2024-07-11 21:42:44.837118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:05.680 [2024-07-11 21:42:44.837139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:46856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.680 [2024-07-11 21:42:44.837154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:30:05.680 [2024-07-11 21:42:44.837176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:46864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.681 [2024-07-11 21:42:44.837190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:30:05.681 [2024-07-11 21:42:44.837212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:46872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.681 [2024-07-11 21:42:44.837226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:30:05.681 [2024-07-11 21:42:44.837248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:46880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.681 [2024-07-11 21:42:44.837263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:30:05.681 [2024-07-11 21:42:44.837306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:46888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.681 [2024-07-11 21:42:44.837322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:30:05.681 [2024-07-11 21:42:44.837344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:46896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.681 [2024-07-11 21:42:44.837359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:05.681 [2024-07-11 21:42:44.837381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:46904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.681 [2024-07-11 21:42:44.837397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:05.681 [2024-07-11 21:42:44.837420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:46912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.681 [2024-07-11 21:42:44.837435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:30:05.681 [2024-07-11 21:42:44.837458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:46920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.681 [2024-07-11 21:42:44.837473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:30:05.681 [2024-07-11 21:42:44.837509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:46928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.681 [2024-07-11 21:42:44.837527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:30:05.681 [2024-07-11 21:42:44.837549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:46936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.681 [2024-07-11 21:42:44.837564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:30:05.681 [2024-07-11 21:42:44.837586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:46944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.681 [2024-07-11 21:42:44.837601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:30:05.681 [2024-07-11 21:42:44.837623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:46952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.681 [2024-07-11 21:42:44.837637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:30:05.681 [2024-07-11 21:42:44.837659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:46960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.681 [2024-07-11 21:42:44.837683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:30:05.681 [2024-07-11 21:42:44.837705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:46968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.681 [2024-07-11 21:42:44.837719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:05.681 [2024-07-11 21:42:44.837741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:46312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.681 [2024-07-11 21:42:44.837756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:30:05.681 [2024-07-11 21:42:44.837786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:46320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.681 [2024-07-11 21:42:44.837802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:30:05.681 [2024-07-11 21:42:44.837824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:46328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.681 [2024-07-11 21:42:44.837838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:30:05.681 [2024-07-11 21:42:44.837860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:46368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.681 [2024-07-11 21:42:44.837875] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:30:05.681 [2024-07-11 21:42:44.837897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:46376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.681 [2024-07-11 21:42:44.837912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:05.681 [2024-07-11 21:42:44.837934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:46392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.681 [2024-07-11 21:42:44.837949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:30:05.681 [2024-07-11 21:42:44.837970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:46408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.681 [2024-07-11 21:42:44.837986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:05.681 [2024-07-11 21:42:44.838009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:46416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.681 [2024-07-11 21:42:44.838023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:30:05.681 [2024-07-11 21:42:44.838050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:46976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.681 [2024-07-11 21:42:44.838067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:30:05.681 [2024-07-11 21:42:44.838090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:46984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.681 [2024-07-11 21:42:44.838105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:05.681 [2024-07-11 21:42:44.838127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:46992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.681 [2024-07-11 21:42:44.838142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:05.681 [2024-07-11 21:42:44.838164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:47000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.681 [2024-07-11 21:42:44.838179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.681 [2024-07-11 21:42:44.838201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:47008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.681 [2024-07-11 21:42:44.838215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:05.681 [2024-07-11 21:42:44.838237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:47016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:30:05.681 [2024-07-11 21:42:44.838258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:05.681 [2024-07-11 21:42:44.838282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:47024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.681 [2024-07-11 21:42:44.838296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:30:05.681 [2024-07-11 21:42:44.838318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:47032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.681 [2024-07-11 21:42:44.838333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:30:05.681 [2024-07-11 21:42:44.838355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:47040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.681 [2024-07-11 21:42:44.838369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:30:05.681 [2024-07-11 21:42:44.838392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:47048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.681 [2024-07-11 21:42:44.838406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:30:05.681 [2024-07-11 21:42:44.838428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:47056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.681 [2024-07-11 21:42:44.838442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:30:05.681 [2024-07-11 21:42:44.838465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:47064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.681 [2024-07-11 21:42:44.838480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:30:05.681 [2024-07-11 21:42:44.838520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:47072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.681 [2024-07-11 21:42:44.838548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:30:05.681 [2024-07-11 21:42:44.838571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:47080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.681 [2024-07-11 21:42:44.838586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:30:05.681 [2024-07-11 21:42:44.838608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:47088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.681 [2024-07-11 21:42:44.838623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:05.681 [2024-07-11 21:42:44.838646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 
lba:47096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.681 [2024-07-11 21:42:44.838661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:30:05.681 [2024-07-11 21:42:44.838682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:47104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.681 [2024-07-11 21:42:44.838697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:30:05.681 [2024-07-11 21:42:44.838720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:47112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.681 [2024-07-11 21:42:44.838756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:30:05.681 [2024-07-11 21:42:44.838780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:46432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.681 [2024-07-11 21:42:44.838795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:30:05.681 [2024-07-11 21:42:44.838817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:46448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.681 [2024-07-11 21:42:44.838832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:30:05.681 [2024-07-11 21:42:44.838854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:46456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.682 [2024-07-11 21:42:44.838869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:30:05.682 [2024-07-11 21:42:44.838898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:46472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.682 [2024-07-11 21:42:44.838912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:30:05.682 [2024-07-11 21:42:44.838934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:46488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.682 [2024-07-11 21:42:44.838949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:30:05.682 [2024-07-11 21:42:44.838978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:46496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.682 [2024-07-11 21:42:44.838993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:30:05.682 [2024-07-11 21:42:44.839015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:46520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.682 [2024-07-11 21:42:44.839029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:30:05.682 [2024-07-11 21:42:44.840036] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:46528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.682 [2024-07-11 21:42:44.840064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:30:05.682 [2024-07-11 21:42:44.840100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:47120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.682 [2024-07-11 21:42:44.840117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:30:05.682 [2024-07-11 21:42:44.840147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:47128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.682 [2024-07-11 21:42:44.840163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:30:05.682 [2024-07-11 21:42:44.840193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:47136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.682 [2024-07-11 21:42:44.840208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:30:05.682 [2024-07-11 21:42:44.840237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:47144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.682 [2024-07-11 21:42:44.840251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:30:05.682 [2024-07-11 21:42:44.840293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:47152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.682 [2024-07-11 21:42:44.840310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:05.682 [2024-07-11 21:42:44.840340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:47160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.682 [2024-07-11 21:42:44.840355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:30:05.682 [2024-07-11 21:42:44.840384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:47168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.682 [2024-07-11 21:42:44.840400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:30:05.682 [2024-07-11 21:42:44.840430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:47176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.682 [2024-07-11 21:42:44.840445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:30:05.682 [2024-07-11 21:42:44.840494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:47184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.682 [2024-07-11 21:42:44.840512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:001f p:0 m:0 dnr:0 
00:30:05.682 [2024-07-11 21:42:44.840542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:47192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.682 [2024-07-11 21:42:44.840558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:30:05.682 [2024-07-11 21:42:44.840587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:47200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.682 [2024-07-11 21:42:44.840602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:05.682 [2024-07-11 21:42:44.840631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:47208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.682 [2024-07-11 21:42:44.840646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:05.682 [2024-07-11 21:42:44.840675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:47216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.682 [2024-07-11 21:42:44.840689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:30:05.682 [2024-07-11 21:42:44.840718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:47224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.682 [2024-07-11 21:42:44.840733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:30:05.682 [2024-07-11 21:42:44.840762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:47232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.682 [2024-07-11 21:42:44.840777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:30:05.682 [2024-07-11 21:42:44.840806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:47240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.682 [2024-07-11 21:42:44.840821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:30:05.682 [2024-07-11 21:42:44.840859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:47248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.682 [2024-07-11 21:42:44.840881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:30:05.682 [2024-07-11 21:42:44.840911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:47256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.682 [2024-07-11 21:42:44.840926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:30:05.682 [2024-07-11 21:42:51.900462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:78728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.682 [2024-07-11 21:42:51.900567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:123 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:30:05.682 [2024-07-11 21:42:51.900628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:78736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.682 [2024-07-11 21:42:51.900649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:30:05.682 [2024-07-11 21:42:51.900673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:78744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.682 [2024-07-11 21:42:51.900688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:05.682 [2024-07-11 21:42:51.900710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:78752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.682 [2024-07-11 21:42:51.900725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:30:05.682 [2024-07-11 21:42:51.900746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:78760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.682 [2024-07-11 21:42:51.900761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:30:05.682 [2024-07-11 21:42:51.900783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:78768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.682 [2024-07-11 21:42:51.900798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:30:05.682 [2024-07-11 21:42:51.900820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:78776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.682 [2024-07-11 21:42:51.900836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:30:05.682 [2024-07-11 21:42:51.900857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:78784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.682 [2024-07-11 21:42:51.900872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:30:05.682 [2024-07-11 21:42:51.900893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.682 [2024-07-11 21:42:51.900908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:05.682 [2024-07-11 21:42:51.900930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:78096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.682 [2024-07-11 21:42:51.900944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:30:05.682 [2024-07-11 21:42:51.900965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:78112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.682 [2024-07-11 21:42:51.901001] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:30:05.682 [2024-07-11 21:42:51.901024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:78120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.682 [2024-07-11 21:42:51.901039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:30:05.682 [2024-07-11 21:42:51.901060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:78136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.682 [2024-07-11 21:42:51.901075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:30:05.682 [2024-07-11 21:42:51.901096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:78144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.682 [2024-07-11 21:42:51.901111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:05.682 [2024-07-11 21:42:51.901132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:78152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.682 [2024-07-11 21:42:51.901147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:05.682 [2024-07-11 21:42:51.901168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:78168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.682 [2024-07-11 21:42:51.901182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:05.682 [2024-07-11 21:42:51.901203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:78184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.682 [2024-07-11 21:42:51.901218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:30:05.683 [2024-07-11 21:42:51.901244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:78800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.683 [2024-07-11 21:42:51.901261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:30:05.683 [2024-07-11 21:42:51.901282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:78808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.683 [2024-07-11 21:42:51.901298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:05.683 [2024-07-11 21:42:51.901321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:78816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.683 [2024-07-11 21:42:51.901337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:30:05.683 [2024-07-11 21:42:51.901358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:78824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:30:05.683 [2024-07-11 21:42:51.901373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:30:05.683 [2024-07-11 21:42:51.901393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:78832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.683 [2024-07-11 21:42:51.901408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:30:05.683 [2024-07-11 21:42:51.901430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:78840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.683 [2024-07-11 21:42:51.901453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:30:05.683 [2024-07-11 21:42:51.901476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:78848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.683 [2024-07-11 21:42:51.901514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:30:05.683 [2024-07-11 21:42:51.901538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:78856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.683 [2024-07-11 21:42:51.901554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:05.683 [2024-07-11 21:42:51.901576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:78864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.683 [2024-07-11 21:42:51.901591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:05.683 [2024-07-11 21:42:51.901612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:78872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.683 [2024-07-11 21:42:51.901629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:30:05.683 [2024-07-11 21:42:51.901650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:78880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.683 [2024-07-11 21:42:51.901665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:30:05.683 [2024-07-11 21:42:51.901687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:78888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.683 [2024-07-11 21:42:51.901701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:30:05.683 [2024-07-11 21:42:51.901722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:78192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.683 [2024-07-11 21:42:51.901737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:30:05.683 [2024-07-11 21:42:51.901759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 
lba:78200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.683 [2024-07-11 21:42:51.901774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:30:05.683 [2024-07-11 21:42:51.901795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:78208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.683 [2024-07-11 21:42:51.901810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:30:05.683 [2024-07-11 21:42:51.901832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:78232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.683 [2024-07-11 21:42:51.901846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:30:05.683 [2024-07-11 21:42:51.901868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:78248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.683 [2024-07-11 21:42:51.901883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:05.683 [2024-07-11 21:42:51.901905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:78264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.683 [2024-07-11 21:42:51.901927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:30:05.683 [2024-07-11 21:42:51.901951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:78296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.683 [2024-07-11 21:42:51.901967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:30:05.683 [2024-07-11 21:42:51.901988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:78320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.683 [2024-07-11 21:42:51.902004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:30:05.683 [2024-07-11 21:42:51.902044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:78896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.683 [2024-07-11 21:42:51.902065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:30:05.683 [2024-07-11 21:42:51.902087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:78904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.683 [2024-07-11 21:42:51.902103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:05.683 [2024-07-11 21:42:51.902124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:78912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.683 [2024-07-11 21:42:51.902139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:30:05.683 [2024-07-11 21:42:51.902160] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:78920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.683 [2024-07-11 21:42:51.902175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:05.683 [2024-07-11 21:42:51.902196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:78928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.683 [2024-07-11 21:42:51.902211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:30:05.683 [2024-07-11 21:42:51.902233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:78936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.683 [2024-07-11 21:42:51.902247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:30:05.683 [2024-07-11 21:42:51.902270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:78944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.683 [2024-07-11 21:42:51.902285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:05.683 [2024-07-11 21:42:51.902306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:78952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.683 [2024-07-11 21:42:51.902322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:05.683 [2024-07-11 21:42:51.902343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:78960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.683 [2024-07-11 21:42:51.902358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.683 [2024-07-11 21:42:51.902380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:78968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.683 [2024-07-11 21:42:51.902395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:05.683 [2024-07-11 21:42:51.902425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:78976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.683 [2024-07-11 21:42:51.902441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:05.683 [2024-07-11 21:42:51.902462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:78984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.683 [2024-07-11 21:42:51.902477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:30:05.683 [2024-07-11 21:42:51.902517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:78992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.683 [2024-07-11 21:42:51.902545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0004 p:0 m:0 
dnr:0 00:30:05.683 [2024-07-11 21:42:51.902577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:79000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.683 [2024-07-11 21:42:51.902592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:30:05.683 [2024-07-11 21:42:51.902615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:79008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.683 [2024-07-11 21:42:51.902630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:30:05.683 [2024-07-11 21:42:51.902651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:79016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.683 [2024-07-11 21:42:51.902667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:30:05.683 [2024-07-11 21:42:51.902688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:79024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.683 [2024-07-11 21:42:51.902713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:30:05.683 [2024-07-11 21:42:51.902735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:79032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.683 [2024-07-11 21:42:51.902750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:30:05.683 [2024-07-11 21:42:51.902771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:78336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.683 [2024-07-11 21:42:51.902786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:30:05.683 [2024-07-11 21:42:51.902807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:78368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.683 [2024-07-11 21:42:51.902822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:05.683 [2024-07-11 21:42:51.902843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:78408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.683 [2024-07-11 21:42:51.902858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:30:05.683 [2024-07-11 21:42:51.902879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:78416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.683 [2024-07-11 21:42:51.902894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:30:05.683 [2024-07-11 21:42:51.902925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:78424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.683 [2024-07-11 21:42:51.902942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:30:05.683 [2024-07-11 21:42:51.902963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:78432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.683 [2024-07-11 21:42:51.902978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:30:05.683 [2024-07-11 21:42:51.903000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:78456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.684 [2024-07-11 21:42:51.903015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:30:05.684 [2024-07-11 21:42:51.903037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:78464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.684 [2024-07-11 21:42:51.903054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:30:05.684 [2024-07-11 21:42:51.903080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:79040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.684 [2024-07-11 21:42:51.903097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:30:05.684 [2024-07-11 21:42:51.903118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:79048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.684 [2024-07-11 21:42:51.903133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:30:05.684 [2024-07-11 21:42:51.903155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:79056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.684 [2024-07-11 21:42:51.903169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:30:05.684 [2024-07-11 21:42:51.903191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:79064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.684 [2024-07-11 21:42:51.903207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:30:05.684 [2024-07-11 21:42:51.903229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:79072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.684 [2024-07-11 21:42:51.903244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:30:05.684 [2024-07-11 21:42:51.903265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:79080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.684 [2024-07-11 21:42:51.903280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:30:05.684 [2024-07-11 21:42:51.903302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:79088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.684 [2024-07-11 21:42:51.903317] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:30:05.684 [2024-07-11 21:42:51.903338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:79096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.684 [2024-07-11 21:42:51.903353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:30:05.684 [2024-07-11 21:42:51.903374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:79104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.684 [2024-07-11 21:42:51.903396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:30:05.684 [2024-07-11 21:42:51.903418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:79112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.684 [2024-07-11 21:42:51.903434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:05.684 [2024-07-11 21:42:51.903455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:79120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.684 [2024-07-11 21:42:51.903470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:30:05.684 [2024-07-11 21:42:51.903503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:79128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.684 [2024-07-11 21:42:51.903522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:30:05.684 [2024-07-11 21:42:51.903549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:79136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.684 [2024-07-11 21:42:51.903573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:30:05.684 [2024-07-11 21:42:51.903594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:79144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.684 [2024-07-11 21:42:51.903609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:30:05.684 [2024-07-11 21:42:51.903630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:79152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.684 [2024-07-11 21:42:51.903645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:30:05.684 [2024-07-11 21:42:51.903666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:79160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.684 [2024-07-11 21:42:51.903680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:05.684 [2024-07-11 21:42:51.903701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:79168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:30:05.684 [2024-07-11 21:42:51.903716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:05.684 [2024-07-11 21:42:51.903737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:78472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.684 [2024-07-11 21:42:51.903751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:30:05.684 [2024-07-11 21:42:51.903772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:78480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.684 [2024-07-11 21:42:51.903787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:30:05.684 [2024-07-11 21:42:51.903808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:78504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.684 [2024-07-11 21:42:51.903823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:30:05.684 [2024-07-11 21:42:51.903844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:78520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.684 [2024-07-11 21:42:51.903866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:30:05.684 [2024-07-11 21:42:51.903888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:78528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.684 [2024-07-11 21:42:51.903903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:30:05.684 [2024-07-11 21:42:51.903924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:78536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.684 [2024-07-11 21:42:51.903939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:30:05.684 [2024-07-11 21:42:51.903960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:78552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.684 [2024-07-11 21:42:51.903975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:30:05.684 [2024-07-11 21:42:51.903996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:78560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.684 [2024-07-11 21:42:51.904010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:30:05.684 [2024-07-11 21:42:51.904032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:79176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.684 [2024-07-11 21:42:51.904047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:05.684 [2024-07-11 21:42:51.904067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:35 nsid:1 lba:79184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.684 [2024-07-11 21:42:51.904081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:30:05.684 [2024-07-11 21:42:51.904102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:79192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.684 [2024-07-11 21:42:51.904117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:30:05.684 [2024-07-11 21:42:51.904137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:79200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.684 [2024-07-11 21:42:51.904153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:30:05.684 [2024-07-11 21:42:51.904174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:79208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.684 [2024-07-11 21:42:51.904189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:30:05.684 [2024-07-11 21:42:51.904211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:79216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.684 [2024-07-11 21:42:51.904226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:30:05.684 [2024-07-11 21:42:51.904247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:79224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.684 [2024-07-11 21:42:51.904262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:30:05.684 [2024-07-11 21:42:51.904283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:79232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.684 [2024-07-11 21:42:51.904298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:30:05.684 [2024-07-11 21:42:51.904326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:79240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.684 [2024-07-11 21:42:51.904343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:30:05.684 [2024-07-11 21:42:51.904365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:79248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.684 [2024-07-11 21:42:51.904379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:30:05.684 [2024-07-11 21:42:51.904401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:79256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.684 [2024-07-11 21:42:51.904416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:30:05.684 [2024-07-11 21:42:51.904437] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:79264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.684 [2024-07-11 21:42:51.904452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:30:05.685 [2024-07-11 21:42:51.904473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:78568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.685 [2024-07-11 21:42:51.904501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:30:05.685 [2024-07-11 21:42:51.904524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:78584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.685 [2024-07-11 21:42:51.904540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:30:05.685 [2024-07-11 21:42:51.904561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:78592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.685 [2024-07-11 21:42:51.904576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:30:05.685 [2024-07-11 21:42:51.904598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:78608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.685 [2024-07-11 21:42:51.904613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:30:05.685 [2024-07-11 21:42:51.904634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:78648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.685 [2024-07-11 21:42:51.904649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:05.685 [2024-07-11 21:42:51.904670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:78664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.685 [2024-07-11 21:42:51.904685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:30:05.685 [2024-07-11 21:42:51.905742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:78672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.685 [2024-07-11 21:42:51.905770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:30:05.685 [2024-07-11 21:42:51.905806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:78712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.685 [2024-07-11 21:42:51.905823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:30:05.685 [2024-07-11 21:42:51.905866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:79272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.685 [2024-07-11 21:42:51.905884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:003f p:0 m:0 dnr:0 
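Every completion printed up to this point carries that same ANA-inaccessible status; a few entries further down, stamped roughly fourteen seconds later (21:43:05), the status switches to "ABORTED - SQ DELETION (00/08)", which is what still-queued commands report once the submission queue on the failing path is deleted during teardown/reconnect. A small decode helper, covering only the two (sct/sc) pairs that actually occur in this log (the names match what SPDK prints):

```python
# Hedged sketch: map the NVMe (status code type, status code) pairs seen above
# to the human-readable names SPDK prints as "<NAME> (sct/sc)".
STATUS_NAMES = {
    (0x03, 0x02): "ASYMMETRIC ACCESS INACCESSIBLE",  # path-related status
    (0x00, 0x08): "ABORTED - SQ DELETION",           # generic status
}

def decode(sct: int, sc: int) -> str:
    """Return the name for an (sct, sc) pair, or mark it as unknown."""
    return STATUS_NAMES.get((sct, sc), f"unknown ({sct:02x}/{sc:02x})")

print(decode(0x03, 0x02))  # ASYMMETRIC ACCESS INACCESSIBLE
print(decode(0x00, 0x08))  # ABORTED - SQ DELETION
```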
00:30:05.685 [2024-07-11 21:42:51.905913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:79280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.685 [2024-07-11 21:42:51.905928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:30:05.685 [2024-07-11 21:42:51.905958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:79288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.685 [2024-07-11 21:42:51.905973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:05.685 [2024-07-11 21:42:51.906003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:79296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.685 [2024-07-11 21:42:51.906019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:05.685 [2024-07-11 21:42:51.906048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:79304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.685 [2024-07-11 21:42:51.906064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:30:05.685 [2024-07-11 21:42:51.906093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:79312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.685 [2024-07-11 21:42:51.906109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:30:05.685 [2024-07-11 21:42:51.906138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:79320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.685 [2024-07-11 21:42:51.906153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:30:05.685 [2024-07-11 21:42:51.906182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:79328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.685 [2024-07-11 21:42:51.906198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:30:05.685 [2024-07-11 21:42:51.906227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:79336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.685 [2024-07-11 21:42:51.906243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:30:05.685 [2024-07-11 21:42:51.906272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:79344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.685 [2024-07-11 21:42:51.906287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:30:05.685 [2024-07-11 21:42:51.906317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:79352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.685 [2024-07-11 21:42:51.906332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:30 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:30:05.685 [2024-07-11 21:42:51.906362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:79360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.685 [2024-07-11 21:42:51.906377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:30:05.685 [2024-07-11 21:42:51.906406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:79368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.685 [2024-07-11 21:42:51.906429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:05.685 [2024-07-11 21:42:51.906459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:79376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.685 [2024-07-11 21:42:51.906475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:30:05.685 [2024-07-11 21:42:51.906521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:79384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.685 [2024-07-11 21:42:51.906551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:30:05.685 [2024-07-11 21:42:51.906583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:79392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.685 [2024-07-11 21:42:51.906599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:30:05.685 [2024-07-11 21:42:51.906647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:79400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.685 [2024-07-11 21:42:51.906667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:30:05.685 [2024-07-11 21:42:51.906698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:79408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.685 [2024-07-11 21:42:51.906713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:30:05.685 [2024-07-11 21:42:51.906743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:79416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.685 [2024-07-11 21:42:51.906759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:30:05.685 [2024-07-11 21:42:51.906789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:79424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.685 [2024-07-11 21:42:51.906805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:30:05.685 [2024-07-11 21:43:05.464553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:57912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.685 [2024-07-11 21:43:05.464621] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.685 [2024-07-11 21:43:05.464654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:57928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.685 [2024-07-11 21:43:05.464682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.685 [2024-07-11 21:43:05.464705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:57952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.685 [2024-07-11 21:43:05.464719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.685 [2024-07-11 21:43:05.464739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:57960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.685 [2024-07-11 21:43:05.464760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.685 [2024-07-11 21:43:05.464780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:57968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.685 [2024-07-11 21:43:05.464825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.685 [2024-07-11 21:43:05.464851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:58560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.685 [2024-07-11 21:43:05.464866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.685 [2024-07-11 21:43:05.464881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:58576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.685 [2024-07-11 21:43:05.464894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.685 [2024-07-11 21:43:05.464909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:58584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.685 [2024-07-11 21:43:05.464922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.685 [2024-07-11 21:43:05.464937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:58600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.685 [2024-07-11 21:43:05.464951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.685 [2024-07-11 21:43:05.464982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:58624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.685 [2024-07-11 21:43:05.464997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.685 [2024-07-11 21:43:05.465011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:58632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.685 [2024-07-11 21:43:05.465024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.685 [2024-07-11 21:43:05.465040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:58664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.685 [2024-07-11 21:43:05.465053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.685 [2024-07-11 21:43:05.465068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:58680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.685 [2024-07-11 21:43:05.465081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.685 [2024-07-11 21:43:05.465096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:57984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.685 [2024-07-11 21:43:05.465109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.685 [2024-07-11 21:43:05.465123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:57992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.685 [2024-07-11 21:43:05.465136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.685 [2024-07-11 21:43:05.465151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:58032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.685 [2024-07-11 21:43:05.465164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.685 [2024-07-11 21:43:05.465179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:58048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.685 [2024-07-11 21:43:05.465194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.686 [2024-07-11 21:43:05.465210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:58072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.686 [2024-07-11 21:43:05.465233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.686 [2024-07-11 21:43:05.465249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:58096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.686 [2024-07-11 21:43:05.465262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.686 [2024-07-11 21:43:05.465278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:58104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.686 [2024-07-11 21:43:05.465292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.686 [2024-07-11 21:43:05.465307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:58128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.686 [2024-07-11 21:43:05.465320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.686 [2024-07-11 21:43:05.465335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:58688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.686 [2024-07-11 21:43:05.465349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.686 [2024-07-11 21:43:05.465364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:58696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.686 [2024-07-11 21:43:05.465378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.686 [2024-07-11 21:43:05.465395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:58704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.686 [2024-07-11 21:43:05.465408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.686 [2024-07-11 21:43:05.465423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:58712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.686 [2024-07-11 21:43:05.465436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.686 [2024-07-11 21:43:05.465452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:58720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.686 [2024-07-11 21:43:05.465465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.686 [2024-07-11 21:43:05.465494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:58728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.686 [2024-07-11 21:43:05.465511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.686 [2024-07-11 21:43:05.465527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:58736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.686 [2024-07-11 21:43:05.465541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.686 [2024-07-11 21:43:05.465556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:58744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.686 [2024-07-11 21:43:05.465569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.686 [2024-07-11 21:43:05.465584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:58752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.686 [2024-07-11 21:43:05.465597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.686 [2024-07-11 21:43:05.465624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:58760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.686 [2024-07-11 21:43:05.465638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.686 
[2024-07-11 21:43:05.465653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:58768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.686 [2024-07-11 21:43:05.465666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.686 [2024-07-11 21:43:05.465681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:58776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.686 [2024-07-11 21:43:05.465695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.686 [2024-07-11 21:43:05.465710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:58784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.686 [2024-07-11 21:43:05.465723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.686 [2024-07-11 21:43:05.465738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:58792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.686 [2024-07-11 21:43:05.465751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.686 [2024-07-11 21:43:05.465766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:58800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.686 [2024-07-11 21:43:05.465780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.686 [2024-07-11 21:43:05.465795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:58808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.686 [2024-07-11 21:43:05.465809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.686 [2024-07-11 21:43:05.465824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:58816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.686 [2024-07-11 21:43:05.465837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.686 [2024-07-11 21:43:05.465852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:58824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.686 [2024-07-11 21:43:05.465866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.686 [2024-07-11 21:43:05.465880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:58832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.686 [2024-07-11 21:43:05.465894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.686 [2024-07-11 21:43:05.465909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:58192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.686 [2024-07-11 21:43:05.465922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.686 [2024-07-11 21:43:05.465937] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:58208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.686 [2024-07-11 21:43:05.465951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.686 [2024-07-11 21:43:05.465966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:58232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.686 [2024-07-11 21:43:05.465985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.686 [2024-07-11 21:43:05.466001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:58240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.686 [2024-07-11 21:43:05.466014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.686 [2024-07-11 21:43:05.466029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:58280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.686 [2024-07-11 21:43:05.466042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.686 [2024-07-11 21:43:05.466057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:58304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.686 [2024-07-11 21:43:05.466070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.686 [2024-07-11 21:43:05.466084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:58320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.686 [2024-07-11 21:43:05.466097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.686 [2024-07-11 21:43:05.466112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:58352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.686 [2024-07-11 21:43:05.466125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.686 [2024-07-11 21:43:05.466140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:58840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.686 [2024-07-11 21:43:05.466153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.686 [2024-07-11 21:43:05.466168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:58848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.686 [2024-07-11 21:43:05.466181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.686 [2024-07-11 21:43:05.466196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:58856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.686 [2024-07-11 21:43:05.466209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.686 [2024-07-11 21:43:05.466224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:41 nsid:1 lba:58864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.686 [2024-07-11 21:43:05.466237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.686 [2024-07-11 21:43:05.466253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:58872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.686 [2024-07-11 21:43:05.466266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.686 [2024-07-11 21:43:05.466282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:58880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.686 [2024-07-11 21:43:05.466295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.686 [2024-07-11 21:43:05.466310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:58888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.686 [2024-07-11 21:43:05.466323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.686 [2024-07-11 21:43:05.466343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:58896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.686 [2024-07-11 21:43:05.466357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.686 [2024-07-11 21:43:05.466372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:58904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.686 [2024-07-11 21:43:05.466385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.686 [2024-07-11 21:43:05.466399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:58912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.686 [2024-07-11 21:43:05.466413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.686 [2024-07-11 21:43:05.466428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:58920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.686 [2024-07-11 21:43:05.466442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.686 [2024-07-11 21:43:05.466457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:58928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.686 [2024-07-11 21:43:05.466470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.686 [2024-07-11 21:43:05.466495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:58936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.686 [2024-07-11 21:43:05.466511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.686 [2024-07-11 21:43:05.466527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:58944 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.686 [2024-07-11 21:43:05.466551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.686 [2024-07-11 21:43:05.466571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:58952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.686 [2024-07-11 21:43:05.466584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.686 [2024-07-11 21:43:05.466599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:58960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.686 [2024-07-11 21:43:05.466613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.686 [2024-07-11 21:43:05.466628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:58968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.686 [2024-07-11 21:43:05.466641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.686 [2024-07-11 21:43:05.466656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:58976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.686 [2024-07-11 21:43:05.466669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.686 [2024-07-11 21:43:05.466685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:58984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.687 [2024-07-11 21:43:05.466698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.687 [2024-07-11 21:43:05.466713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:58992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.687 [2024-07-11 21:43:05.466726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.687 [2024-07-11 21:43:05.466750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:58368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.687 [2024-07-11 21:43:05.466765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.687 [2024-07-11 21:43:05.466780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:58376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.687 [2024-07-11 21:43:05.466793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.687 [2024-07-11 21:43:05.466808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:58392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.687 [2024-07-11 21:43:05.466821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.687 [2024-07-11 21:43:05.466836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:58400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.687 
[2024-07-11 21:43:05.466850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.687 [2024-07-11 21:43:05.466865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:58408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.687 [2024-07-11 21:43:05.466878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.687 [2024-07-11 21:43:05.466894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:58424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.687 [2024-07-11 21:43:05.466907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.687 [2024-07-11 21:43:05.466922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:58440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.687 [2024-07-11 21:43:05.466935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.687 [2024-07-11 21:43:05.466950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:58448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.687 [2024-07-11 21:43:05.466964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.687 [2024-07-11 21:43:05.466979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:59000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.687 [2024-07-11 21:43:05.466992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.687 [2024-07-11 21:43:05.467006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:59008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.687 [2024-07-11 21:43:05.467019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.687 [2024-07-11 21:43:05.467034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:59016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.687 [2024-07-11 21:43:05.467046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.687 [2024-07-11 21:43:05.467061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:59024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.687 [2024-07-11 21:43:05.467074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.687 [2024-07-11 21:43:05.467089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:59032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.687 [2024-07-11 21:43:05.467108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.687 [2024-07-11 21:43:05.467123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:59040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.687 [2024-07-11 21:43:05.467136] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.687 [2024-07-11 21:43:05.467151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:59048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.687 [2024-07-11 21:43:05.467164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.687 [2024-07-11 21:43:05.467179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:59056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.687 [2024-07-11 21:43:05.467192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.687 [2024-07-11 21:43:05.467207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:59064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.687 [2024-07-11 21:43:05.467220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.687 [2024-07-11 21:43:05.467235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:59072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.687 [2024-07-11 21:43:05.467248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.687 [2024-07-11 21:43:05.467262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:59080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.687 [2024-07-11 21:43:05.467275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.687 [2024-07-11 21:43:05.467290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:59088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.687 [2024-07-11 21:43:05.467302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.687 [2024-07-11 21:43:05.467317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:59096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.687 [2024-07-11 21:43:05.467330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.687 [2024-07-11 21:43:05.467345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:59104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.687 [2024-07-11 21:43:05.467359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.687 [2024-07-11 21:43:05.467373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:59112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.687 [2024-07-11 21:43:05.467387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.687 [2024-07-11 21:43:05.467402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:59120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.687 [2024-07-11 21:43:05.467415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.687 [2024-07-11 21:43:05.467430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:59128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.687 [2024-07-11 21:43:05.467443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.687 [2024-07-11 21:43:05.467464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:59136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.687 [2024-07-11 21:43:05.467477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.687 [2024-07-11 21:43:05.467505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:58456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.687 [2024-07-11 21:43:05.467520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.687 [2024-07-11 21:43:05.467535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:58480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.687 [2024-07-11 21:43:05.467548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.687 [2024-07-11 21:43:05.467563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:58496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.687 [2024-07-11 21:43:05.467576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.687 [2024-07-11 21:43:05.467591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:58520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.687 [2024-07-11 21:43:05.467604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.687 [2024-07-11 21:43:05.467619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:58528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.687 [2024-07-11 21:43:05.467632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.687 [2024-07-11 21:43:05.467647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:58544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.687 [2024-07-11 21:43:05.467660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.687 [2024-07-11 21:43:05.467675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:58552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.687 [2024-07-11 21:43:05.467689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.687 [2024-07-11 21:43:05.467704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:58568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.687 [2024-07-11 21:43:05.467717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.687 [2024-07-11 21:43:05.467732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:59144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.687 [2024-07-11 21:43:05.467745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.687 [2024-07-11 21:43:05.467760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:59152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.687 [2024-07-11 21:43:05.467773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.687 [2024-07-11 21:43:05.467788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:59160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.687 [2024-07-11 21:43:05.467801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.687 [2024-07-11 21:43:05.467817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:59168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.687 [2024-07-11 21:43:05.467837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.687 [2024-07-11 21:43:05.467853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:59176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.687 [2024-07-11 21:43:05.467866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.687 [2024-07-11 21:43:05.467881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:59184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.687 [2024-07-11 21:43:05.467894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.687 [2024-07-11 21:43:05.467910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:59192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.687 [2024-07-11 21:43:05.467923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.687 [2024-07-11 21:43:05.467938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:59200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.687 [2024-07-11 21:43:05.467951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.687 [2024-07-11 21:43:05.467967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:59208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.687 [2024-07-11 21:43:05.467980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.687 [2024-07-11 21:43:05.467995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:59216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.687 [2024-07-11 21:43:05.468008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.687 
[2024-07-11 21:43:05.468023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:59224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.687 [2024-07-11 21:43:05.468036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.687 [2024-07-11 21:43:05.468052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:59232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.687 [2024-07-11 21:43:05.468065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.687 [2024-07-11 21:43:05.468080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:59240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.687 [2024-07-11 21:43:05.468093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.687 [2024-07-11 21:43:05.468108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:59248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.687 [2024-07-11 21:43:05.468121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.688 [2024-07-11 21:43:05.468136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:59256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.688 [2024-07-11 21:43:05.468149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.688 [2024-07-11 21:43:05.468163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:59264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.688 [2024-07-11 21:43:05.468176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.688 [2024-07-11 21:43:05.468197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:59272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.688 [2024-07-11 21:43:05.468211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.688 [2024-07-11 21:43:05.468226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:59280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.688 [2024-07-11 21:43:05.468239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.688 [2024-07-11 21:43:05.468254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:59288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.688 [2024-07-11 21:43:05.468267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.688 [2024-07-11 21:43:05.468283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:58592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.688 [2024-07-11 21:43:05.468297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.688 [2024-07-11 21:43:05.468312] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:58608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.688 [2024-07-11 21:43:05.468325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.688 [2024-07-11 21:43:05.468340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:58616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.688 [2024-07-11 21:43:05.468354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.688 [2024-07-11 21:43:05.468370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:58640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.688 [2024-07-11 21:43:05.468383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.688 [2024-07-11 21:43:05.468398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:58648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.688 [2024-07-11 21:43:05.468411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.688 [2024-07-11 21:43:05.468426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:58656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.688 [2024-07-11 21:43:05.468440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.688 [2024-07-11 21:43:05.468454] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14173d0 is same with the state(5) to be set 00:30:05.688 [2024-07-11 21:43:05.468471] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:05.688 [2024-07-11 21:43:05.468491] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:05.688 [2024-07-11 21:43:05.468504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:58672 len:8 PRP1 0x0 PRP2 0x0 00:30:05.688 [2024-07-11 21:43:05.468518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.688 [2024-07-11 21:43:05.468579] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x14173d0 was disconnected and freed. reset controller. 
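
The long run of paired *NOTICE* lines above is bdev_nvme draining qpair 0x14173d0: each queued READ/WRITE is completed manually with status (00/08), i.e. Status Code Type 0x0 (Generic Command Status) and Status Code 0x08 (Command Aborted due to SQ Deletion), before the qpair is freed and the controller reset begins. A minimal sketch of decoding that "(SCT/SC)" pair from the log text; the helper name is invented for illustration and is not part of the test suite.

# Hypothetical helper (illustration only): decode the "(SCT/SC)" pair printed by
# spdk_nvme_print_completion, e.g. "(00/08)".
decode_nvme_status() {
    local sct=$((16#$1)) sc=$((16#$2)) sct_name sc_name
    case "$sct" in
        0) sct_name="Generic Command Status" ;;
        1) sct_name="Command Specific Status" ;;
        2) sct_name="Media and Data Integrity Errors" ;;
        *) sct_name=$(printf 'SCT 0x%02x' "$sct") ;;
    esac
    sc_name=$(printf 'SC 0x%02x' "$sc")
    if [ "$sct" -eq 0 ] && [ "$sc" -eq 8 ]; then
        sc_name="Command Aborted due to SQ Deletion"
    fi
    echo "$sct_name / $sc_name"
}
decode_nvme_status 00 08   # -> Generic Command Status / Command Aborted due to SQ Deletion
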
00:30:05.688 [2024-07-11 21:43:05.469728] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:05.688 [2024-07-11 21:43:05.469819] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13ca920 (9): Bad file descriptor
00:30:05.688 [2024-07-11 21:43:05.470174] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.688 [2024-07-11 21:43:05.470267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.688 [2024-07-11 21:43:05.470321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.688 [2024-07-11 21:43:05.470344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13ca920 with addr=10.0.0.2, port=4421
00:30:05.688 [2024-07-11 21:43:05.470360] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13ca920 is same with the state(5) to be set
00:30:05.688 [2024-07-11 21:43:05.470399] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13ca920 (9): Bad file descriptor
00:30:05.688 [2024-07-11 21:43:05.470432] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:05.688 [2024-07-11 21:43:05.470448] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:05.688 [2024-07-11 21:43:05.470462] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:05.688 [2024-07-11 21:43:05.470510] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:05.688 [2024-07-11 21:43:05.470530] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:05.688 [2024-07-11 21:43:15.518999] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
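
The reset sequence above reads as: tear down the old connection (tqpair 0x13ca920), attempt to reconnect to 10.0.0.2:4421, fail with errno 111 (connection refused) while the listener is unavailable, mark the controller failed, and retry until the reconnect at 21:43:15 succeeds about ten seconds later. Purely as a hedged observation aid, and not something the test itself runs, the controller state could be polled from outside through the bdevperf RPC socket, assuming that session is still listening on /var/tmp/bdevperf.sock:

# Sketch only: watch the attached controllers while bdev_nvme retries the reset.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
for _ in $(seq 1 20); do
    "$RPC" -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers || true   # lists attached controllers as JSON
    sleep 1
done
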
00:30:05.688 Received shutdown signal, test time was about 55.575881 seconds
00:30:05.688
00:30:05.688 Latency(us)
00:30:05.688 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:05.688 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:30:05.688 Verification LBA range: start 0x0 length 0x4000
00:30:05.688 Nvme0n1 : 55.58 10436.44 40.77 0.00 0.00 12244.67 409.60 7015926.69
00:30:05.688 ===================================================================================================================
00:30:05.688 Total : 10436.44 40.77 0.00 0.00 12244.67 409.60 7015926.69
00:30:05.688 21:43:25 -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:30:05.688 21:43:26 -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT
00:30:05.688 21:43:26 -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:30:05.688 21:43:26 -- host/multipath.sh@125 -- # nvmftestfini
00:30:05.688 21:43:26 -- nvmf/common.sh@476 -- # nvmfcleanup
00:30:05.688 21:43:26 -- nvmf/common.sh@116 -- # sync
00:30:05.688 21:43:26 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:30:05.688 21:43:26 -- nvmf/common.sh@119 -- # set +e
00:30:05.688 21:43:26 -- nvmf/common.sh@120 -- # for i in {1..20}
00:30:05.688 21:43:26 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:30:05.688 rmmod nvme_tcp
00:30:05.688 rmmod nvme_fabrics
00:30:05.688 rmmod nvme_keyring
00:30:05.688 21:43:26 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:30:05.688 21:43:26 -- nvmf/common.sh@123 -- # set -e
00:30:05.688 21:43:26 -- nvmf/common.sh@124 -- # return 0
00:30:05.688 21:43:26 -- nvmf/common.sh@477 -- # '[' -n 84432 ']'
00:30:05.688 21:43:26 -- nvmf/common.sh@478 -- # killprocess 84432
00:30:05.688 21:43:26 -- common/autotest_common.sh@926 -- # '[' -z 84432 ']'
00:30:05.688 21:43:26 -- common/autotest_common.sh@930 -- # kill -0 84432
00:30:05.688 21:43:26 -- common/autotest_common.sh@931 -- # uname
00:30:05.688 21:43:26 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:30:05.688 21:43:26 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 84432
00:30:05.688 21:43:26 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:30:05.688 21:43:26 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:30:05.688 killing process with pid 84432
00:30:05.688 21:43:26 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 84432'
00:30:05.688 21:43:26 -- common/autotest_common.sh@945 -- # kill 84432
00:30:05.688 21:43:26 -- common/autotest_common.sh@950 -- # wait 84432
00:30:05.688 21:43:26 -- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:30:05.688 21:43:26 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:30:05.688 21:43:26 -- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:30:05.688 21:43:26 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:30:05.688 21:43:26 -- nvmf/common.sh@277 -- # remove_spdk_ns
00:30:05.688 21:43:26 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:30:05.688 21:43:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:30:05.688 21:43:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:30:05.688 21:43:26 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if
00:30:05.688
00:30:05.688 real 1m1.369s
00:30:05.688 user 2m50.111s
00:30:05.688 sys 0m18.587s
00:30:05.688 21:43:26 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:30:05.688
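
Stripped of the xtrace noise, the multipath teardown above boils down to a handful of commands. The following is a condensed sketch of what nvmftestfini/nvmfcleanup effectively do in this run, not the literal helper code; pid 84432 is the nvmf_tgt started earlier, and the final namespace removal is inferred from the _remove_spdk_ns call.

# Condensed teardown sketch (values taken from the trace above):
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
sync
modprobe -v -r nvme-tcp        # unloads nvme_tcp, nvme_fabrics, nvme_keyring as logged
modprobe -v -r nvme-fabrics
kill 84432                     # killprocess: stop the nvmf_tgt reactor, then wait for it to exit
ip -4 addr flush nvmf_init_if  # drop the initiator-side test address
# _remove_spdk_ns presumably tears down the nvmf_tgt_ns_spdk namespace and its veth/bridge links
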
************************************ 00:30:05.688 END TEST nvmf_multipath 00:30:05.688 ************************************ 00:30:05.688 21:43:26 -- common/autotest_common.sh@10 -- # set +x 00:30:05.688 21:43:26 -- nvmf/nvmf.sh@117 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:30:05.688 21:43:26 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:30:05.688 21:43:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:05.688 21:43:26 -- common/autotest_common.sh@10 -- # set +x 00:30:05.688 ************************************ 00:30:05.688 START TEST nvmf_timeout 00:30:05.688 ************************************ 00:30:05.688 21:43:26 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:30:05.947 * Looking for test storage... 00:30:05.947 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:30:05.947 21:43:26 -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:30:05.947 21:43:26 -- nvmf/common.sh@7 -- # uname -s 00:30:05.947 21:43:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:05.947 21:43:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:05.947 21:43:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:05.947 21:43:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:05.947 21:43:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:05.947 21:43:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:05.947 21:43:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:05.947 21:43:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:05.947 21:43:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:05.947 21:43:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:05.947 21:43:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:65f0dc09-2f81-4c7b-a413-2a2a000e2750 00:30:05.947 21:43:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=65f0dc09-2f81-4c7b-a413-2a2a000e2750 00:30:05.947 21:43:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:05.947 21:43:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:05.947 21:43:26 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:30:05.947 21:43:26 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:30:05.947 21:43:26 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:05.947 21:43:26 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:05.947 21:43:26 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:05.947 21:43:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:05.947 21:43:26 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:05.947 21:43:26 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:05.947 21:43:26 -- paths/export.sh@5 -- # export PATH 00:30:05.947 21:43:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:05.947 21:43:26 -- nvmf/common.sh@46 -- # : 0 00:30:05.947 21:43:26 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:30:05.947 21:43:26 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:30:05.947 21:43:26 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:30:05.947 21:43:26 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:05.947 21:43:26 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:05.947 21:43:26 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:30:05.947 21:43:26 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:30:05.947 21:43:26 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:30:05.947 21:43:26 -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:05.947 21:43:26 -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:05.947 21:43:26 -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:05.947 21:43:26 -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:30:05.947 21:43:26 -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:05.947 21:43:26 -- host/timeout.sh@19 -- # nvmftestinit 00:30:05.947 21:43:26 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:30:05.947 21:43:26 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:05.947 21:43:26 -- nvmf/common.sh@436 -- # prepare_net_devs 00:30:05.947 21:43:26 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:30:05.947 21:43:26 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:30:05.947 21:43:26 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:05.947 21:43:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:05.947 21:43:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:05.947 21:43:26 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 
00:30:05.947 21:43:26 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:30:05.947 21:43:26 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:30:05.947 21:43:26 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:30:05.947 21:43:26 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:30:05.947 21:43:26 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:30:05.947 21:43:26 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:05.947 21:43:26 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:05.947 21:43:26 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:30:05.947 21:43:26 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:30:05.947 21:43:26 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:30:05.947 21:43:26 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:30:05.947 21:43:26 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:30:05.947 21:43:26 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:05.947 21:43:26 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:30:05.947 21:43:26 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:30:05.947 21:43:26 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:30:05.947 21:43:26 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:30:05.947 21:43:26 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:30:05.947 21:43:26 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:30:05.947 Cannot find device "nvmf_tgt_br" 00:30:05.947 21:43:26 -- nvmf/common.sh@154 -- # true 00:30:05.947 21:43:26 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:30:05.947 Cannot find device "nvmf_tgt_br2" 00:30:05.947 21:43:26 -- nvmf/common.sh@155 -- # true 00:30:05.947 21:43:26 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:30:05.947 21:43:26 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:30:05.947 Cannot find device "nvmf_tgt_br" 00:30:05.947 21:43:26 -- nvmf/common.sh@157 -- # true 00:30:05.947 21:43:26 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:30:05.947 Cannot find device "nvmf_tgt_br2" 00:30:05.947 21:43:26 -- nvmf/common.sh@158 -- # true 00:30:05.947 21:43:26 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:30:05.947 21:43:26 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:30:05.947 21:43:26 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:30:05.947 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:05.947 21:43:26 -- nvmf/common.sh@161 -- # true 00:30:05.947 21:43:26 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:30:05.947 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:05.947 21:43:26 -- nvmf/common.sh@162 -- # true 00:30:05.947 21:43:26 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:30:05.947 21:43:26 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:30:05.947 21:43:26 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:30:05.947 21:43:26 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:30:05.947 21:43:26 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:30:05.947 21:43:26 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:30:05.947 21:43:26 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 
dev nvmf_init_if 00:30:05.947 21:43:26 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:30:05.947 21:43:26 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:30:05.947 21:43:26 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:30:05.947 21:43:26 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:30:05.947 21:43:26 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:30:06.205 21:43:26 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:30:06.205 21:43:26 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:30:06.205 21:43:26 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:30:06.205 21:43:26 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:30:06.205 21:43:26 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:30:06.205 21:43:26 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:30:06.205 21:43:26 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:30:06.205 21:43:26 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:30:06.205 21:43:26 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:30:06.205 21:43:26 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:30:06.205 21:43:26 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:30:06.205 21:43:26 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:30:06.205 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:06.205 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.137 ms 00:30:06.205 00:30:06.205 --- 10.0.0.2 ping statistics --- 00:30:06.205 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:06.205 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:30:06.205 21:43:26 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:30:06.205 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:30:06.205 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.097 ms 00:30:06.205 00:30:06.205 --- 10.0.0.3 ping statistics --- 00:30:06.205 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:06.205 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:30:06.205 21:43:27 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:30:06.205 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:06.205 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.040 ms 00:30:06.205 00:30:06.205 --- 10.0.0.1 ping statistics --- 00:30:06.205 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:06.205 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:30:06.205 21:43:27 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:06.205 21:43:27 -- nvmf/common.sh@421 -- # return 0 00:30:06.205 21:43:27 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:30:06.205 21:43:27 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:06.205 21:43:27 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:30:06.205 21:43:27 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:30:06.205 21:43:27 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:06.206 21:43:27 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:30:06.206 21:43:27 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:30:06.206 21:43:27 -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:30:06.206 21:43:27 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:30:06.206 21:43:27 -- common/autotest_common.sh@712 -- # xtrace_disable 00:30:06.206 21:43:27 -- common/autotest_common.sh@10 -- # set +x 00:30:06.206 21:43:27 -- nvmf/common.sh@469 -- # nvmfpid=85607 00:30:06.206 21:43:27 -- nvmf/common.sh@470 -- # waitforlisten 85607 00:30:06.206 21:43:27 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:30:06.206 21:43:27 -- common/autotest_common.sh@819 -- # '[' -z 85607 ']' 00:30:06.206 21:43:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:06.206 21:43:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:06.206 21:43:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:06.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:06.206 21:43:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:06.206 21:43:27 -- common/autotest_common.sh@10 -- # set +x 00:30:06.206 [2024-07-11 21:43:27.088275] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:30:06.206 [2024-07-11 21:43:27.088384] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:06.463 [2024-07-11 21:43:27.228070] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:06.463 [2024-07-11 21:43:27.327380] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:30:06.463 [2024-07-11 21:43:27.327557] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:06.463 [2024-07-11 21:43:27.327571] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:06.463 [2024-07-11 21:43:27.327580] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
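
For the nvmf_timeout test the same virtual topology is rebuilt from scratch: nvmf_veth_init creates a private network namespace with two veth pairs bridged back to the host, assigns 10.0.0.1 (initiator) and 10.0.0.2/10.0.0.3 (target) and verifies reachability with the pings above, then nvmfappstart launches nvmf_tgt inside that namespace. A condensed sketch of those steps, with commands taken from the trace; the final readiness loop is an assumption standing in for the waitforlisten helper.

# Namespace + veth topology (condensed from the nvmf_veth_init trace above)
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link add nvmf_br type bridge
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do ip link set "$dev" up; done
for dev in nvmf_tgt_if nvmf_tgt_if2 lo; do ip netns exec nvmf_tgt_ns_spdk ip link set "$dev" up; done
for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" master nvmf_br; done
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3 && ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

# Target launch inside the namespace (nvmfappstart), then wait until its RPC socket answers
modprobe nvme-tcp
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version >/dev/null 2>&1; do sleep 0.5; done
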
00:30:06.463 [2024-07-11 21:43:27.327984] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:06.463 [2024-07-11 21:43:27.328025] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:07.396 21:43:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:07.396 21:43:28 -- common/autotest_common.sh@852 -- # return 0 00:30:07.396 21:43:28 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:30:07.396 21:43:28 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:07.396 21:43:28 -- common/autotest_common.sh@10 -- # set +x 00:30:07.397 21:43:28 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:07.397 21:43:28 -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:07.397 21:43:28 -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:07.397 [2024-07-11 21:43:28.313210] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:07.397 21:43:28 -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:30:07.964 Malloc0 00:30:07.964 21:43:28 -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:07.964 21:43:28 -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:08.221 21:43:29 -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:08.479 [2024-07-11 21:43:29.388124] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:08.479 21:43:29 -- host/timeout.sh@32 -- # bdevperf_pid=85656 00:30:08.479 21:43:29 -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:30:08.479 21:43:29 -- host/timeout.sh@34 -- # waitforlisten 85656 /var/tmp/bdevperf.sock 00:30:08.479 21:43:29 -- common/autotest_common.sh@819 -- # '[' -z 85656 ']' 00:30:08.480 21:43:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:08.480 21:43:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:08.480 21:43:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:08.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:08.480 21:43:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:08.480 21:43:29 -- common/autotest_common.sh@10 -- # set +x 00:30:08.738 [2024-07-11 21:43:29.455896] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
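
Once the target is up, host/timeout.sh provisions it over the default RPC socket in the usual four steps, exactly as traced above: create the TCP transport, create a 64 MB malloc bdev with 512-byte blocks (MALLOC_BDEV_SIZE/MALLOC_BLOCK_SIZE from the script), wrap it in a subsystem, and expose a listener on 10.0.0.2:4420. Condensed from the rpc.py calls in the trace:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

"$RPC" nvmf_create_transport -t tcp -o -u 8192                                    # transport options as invoked by the test
"$RPC" bdev_malloc_create 64 512 -b Malloc0                                       # 64 MB bdev, 512-byte blocks
"$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001  # allow any host, fixed serial
"$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
"$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
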
00:30:08.738 [2024-07-11 21:43:29.455992] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85656 ] 00:30:08.738 [2024-07-11 21:43:29.593529] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:08.996 [2024-07-11 21:43:29.689837] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:09.561 21:43:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:09.561 21:43:30 -- common/autotest_common.sh@852 -- # return 0 00:30:09.561 21:43:30 -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:30:09.819 21:43:30 -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:30:10.078 NVMe0n1 00:30:10.078 21:43:30 -- host/timeout.sh@51 -- # rpc_pid=85680 00:30:10.078 21:43:30 -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:10.078 21:43:30 -- host/timeout.sh@53 -- # sleep 1 00:30:10.375 Running I/O for 10 seconds... 00:30:11.349 21:43:31 -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:11.349 [2024-07-11 21:43:32.155593] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdbea0 is same with the state(5) to be set 00:30:11.349 [2024-07-11 21:43:32.155667] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdbea0 is same with the state(5) to be set 00:30:11.349 [2024-07-11 21:43:32.155679] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdbea0 is same with the state(5) to be set 00:30:11.349 [2024-07-11 21:43:32.155689] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdbea0 is same with the state(5) to be set 00:30:11.349 [2024-07-11 21:43:32.155706] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdbea0 is same with the state(5) to be set 00:30:11.349 [2024-07-11 21:43:32.155715] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdbea0 is same with the state(5) to be set 00:30:11.349 [2024-07-11 21:43:32.155724] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdbea0 is same with the state(5) to be set 00:30:11.349 [2024-07-11 21:43:32.155732] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdbea0 is same with the state(5) to be set 00:30:11.349 [2024-07-11 21:43:32.155741] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdbea0 is same with the state(5) to be set 00:30:11.349 [2024-07-11 21:43:32.155750] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdbea0 is same with the state(5) to be set 00:30:11.349 [2024-07-11 21:43:32.155758] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdbea0 is same with the state(5) to be set 00:30:11.349 [2024-07-11 21:43:32.155766] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdbea0 is same with the state(5) to be set 00:30:11.349 
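
On the initiator side everything runs through a dedicated bdevperf instance with its own RPC socket: bdev_nvme is configured, the controller is attached with a 5-second controller-loss timeout and a 2-second reconnect delay, perform_tests starts the verify workload, and the script then removes the 4420 listener, which triggers the abort and reconnect activity logged below. A condensed sketch of those steps, command lines taken from the trace; backgrounding and ordering are simplified.

SPDK=/home/vagrant/spdk_repo/spdk
BPERF_RPC=/var/tmp/bdevperf.sock

"$SPDK/build/examples/bdevperf" -m 0x4 -z -r "$BPERF_RPC" -q 128 -o 4096 -w verify -t 10 -f &

"$SPDK/scripts/rpc.py" -s "$BPERF_RPC" bdev_nvme_set_options -r -1        # retry option exactly as invoked by the script
"$SPDK/scripts/rpc.py" -s "$BPERF_RPC" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2

"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF_RPC" perform_tests &
sleep 1

# Fault injection: drop the 4420 listener; queued I/O is aborted (the SQ DELETION completions)
# and bdev_nvme enters its reconnect cycle, bounded by the 5 s controller-loss timeout.
"$SPDK/scripts/rpc.py" nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
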
[2024-07-11 21:43:32.155833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:117952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.349 [2024-07-11 21:43:32.155865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.349 [2024-07-11 21:43:32.155888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:117992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.349 [2024-07-11 21:43:32.155900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.349 [2024-07-11 21:43:32.155912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:118000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.349 [2024-07-11 21:43:32.155922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.349 [2024-07-11 21:43:32.155934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:118008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.349 [2024-07-11 21:43:32.155943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.349 [2024-07-11 21:43:32.155956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:117360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.349 [2024-07-11 21:43:32.155966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.349 [2024-07-11 21:43:32.155978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:117368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.349 [2024-07-11 21:43:32.155987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.349 [2024-07-11 21:43:32.155999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:117376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.349 [2024-07-11 21:43:32.156009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.349 [2024-07-11 21:43:32.156021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:117384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.349 [2024-07-11 21:43:32.156031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.349 [2024-07-11 21:43:32.156043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:117392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.349 [2024-07-11 21:43:32.156053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.349 [2024-07-11 21:43:32.156065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:117448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.349 [2024-07-11 21:43:32.156074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.349 [2024-07-11 21:43:32.156086] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:117456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.349 [2024-07-11 21:43:32.156095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.349 [2024-07-11 21:43:32.156107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:117464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.349 [2024-07-11 21:43:32.156116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.349 [2024-07-11 21:43:32.156128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:118024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.349 [2024-07-11 21:43:32.156138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.349 [2024-07-11 21:43:32.156151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:118040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.349 [2024-07-11 21:43:32.156161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.349 [2024-07-11 21:43:32.156174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:118048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.349 [2024-07-11 21:43:32.156183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.349 [2024-07-11 21:43:32.156195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:118056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.349 [2024-07-11 21:43:32.156207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.349 [2024-07-11 21:43:32.156219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:118064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.349 [2024-07-11 21:43:32.156238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.349 [2024-07-11 21:43:32.156250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:118072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.349 [2024-07-11 21:43:32.156260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.349 [2024-07-11 21:43:32.156272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:118080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.349 [2024-07-11 21:43:32.156282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.349 [2024-07-11 21:43:32.156294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:118088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.349 [2024-07-11 21:43:32.156303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.349 [2024-07-11 21:43:32.156315] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:118096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.349 [2024-07-11 21:43:32.156324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.349 [2024-07-11 21:43:32.156336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:118104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.349 [2024-07-11 21:43:32.156346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.349 [2024-07-11 21:43:32.156357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:118112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.349 [2024-07-11 21:43:32.156369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.349 [2024-07-11 21:43:32.156381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:118120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.349 [2024-07-11 21:43:32.156390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.349 [2024-07-11 21:43:32.156402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:118128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.349 [2024-07-11 21:43:32.156411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.349 [2024-07-11 21:43:32.156423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:118136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.350 [2024-07-11 21:43:32.156432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.350 [2024-07-11 21:43:32.156444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:118144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.350 [2024-07-11 21:43:32.156453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.350 [2024-07-11 21:43:32.156465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:118152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.350 [2024-07-11 21:43:32.156475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.350 [2024-07-11 21:43:32.156500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:118160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.350 [2024-07-11 21:43:32.156512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.350 [2024-07-11 21:43:32.156523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:117472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.350 [2024-07-11 21:43:32.156533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.350 [2024-07-11 21:43:32.156545] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:96 nsid:1 lba:117480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.350 [2024-07-11 21:43:32.156556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.350 [2024-07-11 21:43:32.156568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:117488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.350 [2024-07-11 21:43:32.156577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.350 [2024-07-11 21:43:32.156589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:117496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.350 [2024-07-11 21:43:32.156599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.350 [2024-07-11 21:43:32.156611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:117504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.350 [2024-07-11 21:43:32.156623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.350 [2024-07-11 21:43:32.156636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:117512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.350 [2024-07-11 21:43:32.156645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.350 [2024-07-11 21:43:32.156659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:117528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.350 [2024-07-11 21:43:32.156669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.350 [2024-07-11 21:43:32.156680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:117576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.350 [2024-07-11 21:43:32.156690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.350 [2024-07-11 21:43:32.156702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:118168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.350 [2024-07-11 21:43:32.156712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.350 [2024-07-11 21:43:32.156724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:118176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.350 [2024-07-11 21:43:32.156733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.350 [2024-07-11 21:43:32.156753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:118184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.350 [2024-07-11 21:43:32.156763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.350 [2024-07-11 21:43:32.156774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 
lba:118192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.350 [2024-07-11 21:43:32.156783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.350 [2024-07-11 21:43:32.156795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:118200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.350 [2024-07-11 21:43:32.156804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.350 [2024-07-11 21:43:32.156815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:118208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.350 [2024-07-11 21:43:32.156825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.350 [2024-07-11 21:43:32.156836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:118216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.350 [2024-07-11 21:43:32.156845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.350 [2024-07-11 21:43:32.156856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:118224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.350 [2024-07-11 21:43:32.156866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.350 [2024-07-11 21:43:32.156877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:118232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.350 [2024-07-11 21:43:32.156887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.350 [2024-07-11 21:43:32.156898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:118240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.350 [2024-07-11 21:43:32.156908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.350 [2024-07-11 21:43:32.156919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:118248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.350 [2024-07-11 21:43:32.156929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.350 [2024-07-11 21:43:32.156940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:118256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.350 [2024-07-11 21:43:32.156950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.350 [2024-07-11 21:43:32.156962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:118264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.350 [2024-07-11 21:43:32.156980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.350 [2024-07-11 21:43:32.156993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:118272 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:30:11.350 [2024-07-11 21:43:32.157003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.350 [2024-07-11 21:43:32.157015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:118280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.350 [2024-07-11 21:43:32.157025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.350 [2024-07-11 21:43:32.157036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:118288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.350 [2024-07-11 21:43:32.157046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.350 [2024-07-11 21:43:32.157057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:118296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.350 [2024-07-11 21:43:32.157067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.350 [2024-07-11 21:43:32.157078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:118304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.350 [2024-07-11 21:43:32.157087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.350 [2024-07-11 21:43:32.157099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:117592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.350 [2024-07-11 21:43:32.157108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.350 [2024-07-11 21:43:32.157120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:117616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.350 [2024-07-11 21:43:32.157130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.350 [2024-07-11 21:43:32.157142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:117632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.350 [2024-07-11 21:43:32.157152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.350 [2024-07-11 21:43:32.157164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:117648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.350 [2024-07-11 21:43:32.157174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.350 [2024-07-11 21:43:32.157186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:117664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.350 [2024-07-11 21:43:32.157196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.350 [2024-07-11 21:43:32.157207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:117688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.350 
[2024-07-11 21:43:32.157216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.350 [2024-07-11 21:43:32.157228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:117712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.350 [2024-07-11 21:43:32.157237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.350 [2024-07-11 21:43:32.157249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:117736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.350 [2024-07-11 21:43:32.157259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.350 [2024-07-11 21:43:32.157271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:118312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.350 [2024-07-11 21:43:32.157281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.350 [2024-07-11 21:43:32.157293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:118320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.350 [2024-07-11 21:43:32.157303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.350 [2024-07-11 21:43:32.157314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:118328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.350 [2024-07-11 21:43:32.157330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.350 [2024-07-11 21:43:32.157341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:118336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.350 [2024-07-11 21:43:32.157351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.350 [2024-07-11 21:43:32.157362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:118344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.351 [2024-07-11 21:43:32.157373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.351 [2024-07-11 21:43:32.157384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:118352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.351 [2024-07-11 21:43:32.157394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.351 [2024-07-11 21:43:32.157406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:118360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.351 [2024-07-11 21:43:32.157416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.351 [2024-07-11 21:43:32.157428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:118368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.351 [2024-07-11 21:43:32.157438] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.351 [2024-07-11 21:43:32.157449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:118376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.351 [2024-07-11 21:43:32.157459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.351 [2024-07-11 21:43:32.157470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:118384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.351 [2024-07-11 21:43:32.157490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.351 [2024-07-11 21:43:32.157503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:118392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.351 [2024-07-11 21:43:32.157519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.351 [2024-07-11 21:43:32.157539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:118400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.351 [2024-07-11 21:43:32.157548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.351 [2024-07-11 21:43:32.157560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:118408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.351 [2024-07-11 21:43:32.157570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.351 [2024-07-11 21:43:32.157581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:118416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.351 [2024-07-11 21:43:32.157591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.351 [2024-07-11 21:43:32.157602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:118424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.351 [2024-07-11 21:43:32.157612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.351 [2024-07-11 21:43:32.157623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:118432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.351 [2024-07-11 21:43:32.157633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.351 [2024-07-11 21:43:32.157644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:118440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.351 [2024-07-11 21:43:32.157654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.351 [2024-07-11 21:43:32.157665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:118448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.351 [2024-07-11 21:43:32.157674] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.351 [2024-07-11 21:43:32.157686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:118456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.351 [2024-07-11 21:43:32.157701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.351 [2024-07-11 21:43:32.157713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:117760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.351 [2024-07-11 21:43:32.157722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.351 [2024-07-11 21:43:32.157735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:117768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.351 [2024-07-11 21:43:32.157744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.351 [2024-07-11 21:43:32.157755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:117776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.351 [2024-07-11 21:43:32.157764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.351 [2024-07-11 21:43:32.157776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:117816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.351 [2024-07-11 21:43:32.157785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.351 [2024-07-11 21:43:32.157797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:117824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.351 [2024-07-11 21:43:32.157808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.351 [2024-07-11 21:43:32.157819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:117832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.351 [2024-07-11 21:43:32.157829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.351 [2024-07-11 21:43:32.157841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:117848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.351 [2024-07-11 21:43:32.157851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.351 [2024-07-11 21:43:32.157863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:117856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.351 [2024-07-11 21:43:32.157878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.351 [2024-07-11 21:43:32.157890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:118464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.351 [2024-07-11 21:43:32.157900] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.351 [2024-07-11 21:43:32.157921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:118472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.351 [2024-07-11 21:43:32.157931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.351 [2024-07-11 21:43:32.157943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:118480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.351 [2024-07-11 21:43:32.157952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.351 [2024-07-11 21:43:32.157964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:118488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.351 [2024-07-11 21:43:32.157974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.351 [2024-07-11 21:43:32.157985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:118496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.351 [2024-07-11 21:43:32.157994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.351 [2024-07-11 21:43:32.158010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:118504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.351 [2024-07-11 21:43:32.158020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.351 [2024-07-11 21:43:32.158032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:118512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.351 [2024-07-11 21:43:32.158042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.351 [2024-07-11 21:43:32.158054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:118520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.351 [2024-07-11 21:43:32.158067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.351 [2024-07-11 21:43:32.158079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:118528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.351 [2024-07-11 21:43:32.158089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.351 [2024-07-11 21:43:32.158101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:118536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.351 [2024-07-11 21:43:32.158110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.351 [2024-07-11 21:43:32.158122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:118544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.351 [2024-07-11 21:43:32.158131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.351 [2024-07-11 21:43:32.158143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:118552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.351 [2024-07-11 21:43:32.158152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.351 [2024-07-11 21:43:32.158164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:118560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.351 [2024-07-11 21:43:32.158173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.351 [2024-07-11 21:43:32.158185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:117864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.351 [2024-07-11 21:43:32.158194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.351 [2024-07-11 21:43:32.158206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:117872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.351 [2024-07-11 21:43:32.158215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.351 [2024-07-11 21:43:32.158226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:117880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.351 [2024-07-11 21:43:32.158241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.351 [2024-07-11 21:43:32.158253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:117888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.351 [2024-07-11 21:43:32.158263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.351 [2024-07-11 21:43:32.158274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:117896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.351 [2024-07-11 21:43:32.158285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.351 [2024-07-11 21:43:32.158296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:117904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.351 [2024-07-11 21:43:32.158306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.352 [2024-07-11 21:43:32.158318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:117912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.352 [2024-07-11 21:43:32.158328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.352 [2024-07-11 21:43:32.158339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:117920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.352 [2024-07-11 21:43:32.158349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:30:11.352 [2024-07-11 21:43:32.158360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.352 [2024-07-11 21:43:32.158370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.352 [2024-07-11 21:43:32.158381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:118576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.352 [2024-07-11 21:43:32.158391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.352 [2024-07-11 21:43:32.158402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:118584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.352 [2024-07-11 21:43:32.158417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.352 [2024-07-11 21:43:32.158434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:118592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.352 [2024-07-11 21:43:32.158444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.352 [2024-07-11 21:43:32.158456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:118600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.352 [2024-07-11 21:43:32.158465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.352 [2024-07-11 21:43:32.158477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:118608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.352 [2024-07-11 21:43:32.158499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.352 [2024-07-11 21:43:32.158511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:118616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.352 [2024-07-11 21:43:32.158521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.352 [2024-07-11 21:43:32.158533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:118624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.352 [2024-07-11 21:43:32.158543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.352 [2024-07-11 21:43:32.158555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:118632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.352 [2024-07-11 21:43:32.158574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.352 [2024-07-11 21:43:32.158588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:117936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.352 [2024-07-11 21:43:32.158598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.352 
[2024-07-11 21:43:32.158610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:117944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.352 [2024-07-11 21:43:32.158626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.352 [2024-07-11 21:43:32.158638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:117960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.352 [2024-07-11 21:43:32.158648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.352 [2024-07-11 21:43:32.158660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:117968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.352 [2024-07-11 21:43:32.158675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.352 [2024-07-11 21:43:32.158687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:117976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.352 [2024-07-11 21:43:32.158697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.352 [2024-07-11 21:43:32.158708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:117984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.352 [2024-07-11 21:43:32.158718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.352 [2024-07-11 21:43:32.158730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:118016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.352 [2024-07-11 21:43:32.158740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.352 [2024-07-11 21:43:32.158751] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x688320 is same with the state(5) to be set 00:30:11.352 [2024-07-11 21:43:32.158763] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:11.352 [2024-07-11 21:43:32.158771] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:11.352 [2024-07-11 21:43:32.158780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:118032 len:8 PRP1 0x0 PRP2 0x0 00:30:11.352 [2024-07-11 21:43:32.158790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.352 [2024-07-11 21:43:32.158849] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x688320 was disconnected and freed. reset controller. 
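The "(00/08)" suffix on every aborted completion above is the NVMe status code type / status code pair: SCT 0x0 selects the generic command status set and SC 0x08 is "Command Aborted due to SQ Deletion", which is what the driver reports for each request still queued on qpair 0x688320 when its submission queue is deleted during the disconnect. A minimal bash sketch of that mapping, assuming nothing beyond the two values that actually appear in this log (the helper name is illustrative and is not part of the test scripts):

decode_nvme_status() {
  # Map an (SCT/SC) pair as printed by spdk_nvme_print_completion to its spec name.
  local sct=$1 sc=$2
  case "$sct/$sc" in
    00/00) echo "SUCCESS" ;;                 # generic status: successful completion
    00/08) echo "ABORTED - SQ DELETION" ;;   # generic status: command aborted due to SQ deletion
    *)     echo "sct=$sct sc=$sc (see the NVMe base spec status code tables)" ;;
  esac
}
decode_nvme_status 00 08   # -> ABORTED - SQ DELETION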
00:30:11.352 [2024-07-11 21:43:32.158933] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:11.352 [2024-07-11 21:43:32.158949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.352 [2024-07-11 21:43:32.158961] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:11.352 [2024-07-11 21:43:32.158971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.352 [2024-07-11 21:43:32.158985] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:11.352 [2024-07-11 21:43:32.158994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.352 [2024-07-11 21:43:32.159004] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:11.352 [2024-07-11 21:43:32.159013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.352 [2024-07-11 21:43:32.159023] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x68d3a0 is same with the state(5) to be set 00:30:11.352 [2024-07-11 21:43:32.159247] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:11.352 [2024-07-11 21:43:32.159271] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x68d3a0 (9): Bad file descriptor 00:30:11.352 [2024-07-11 21:43:32.159388] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.352 [2024-07-11 21:43:32.159452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.352 [2024-07-11 21:43:32.159514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.352 [2024-07-11 21:43:32.159533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x68d3a0 with addr=10.0.0.2, port=4420 00:30:11.352 [2024-07-11 21:43:32.159550] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x68d3a0 is same with the state(5) to be set 00:30:11.352 [2024-07-11 21:43:32.159570] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x68d3a0 (9): Bad file descriptor 00:30:11.352 [2024-07-11 21:43:32.159586] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:11.352 [2024-07-11 21:43:32.159595] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:11.352 [2024-07-11 21:43:32.159606] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:11.352 [2024-07-11 21:43:32.159635] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:11.352 [2024-07-11 21:43:32.159647] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:11.352 21:43:32 -- host/timeout.sh@56 -- # sleep 2 00:30:13.333 [2024-07-11 21:43:34.159867] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.333 [2024-07-11 21:43:34.159995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.333 [2024-07-11 21:43:34.160040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.333 [2024-07-11 21:43:34.160057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x68d3a0 with addr=10.0.0.2, port=4420 00:30:13.333 [2024-07-11 21:43:34.160071] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x68d3a0 is same with the state(5) to be set 00:30:13.333 [2024-07-11 21:43:34.160098] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x68d3a0 (9): Bad file descriptor 00:30:13.333 [2024-07-11 21:43:34.160130] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:13.333 [2024-07-11 21:43:34.160142] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:13.333 [2024-07-11 21:43:34.160154] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:13.333 [2024-07-11 21:43:34.160186] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:13.333 [2024-07-11 21:43:34.160199] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:13.333 21:43:34 -- host/timeout.sh@57 -- # get_controller 00:30:13.333 21:43:34 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:13.333 21:43:34 -- host/timeout.sh@41 -- # jq -r '.[].name' 00:30:13.590 21:43:34 -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:30:13.590 21:43:34 -- host/timeout.sh@58 -- # get_bdev 00:30:13.590 21:43:34 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:30:13.590 21:43:34 -- host/timeout.sh@37 -- # jq -r '.[].name' 00:30:13.848 21:43:34 -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:30:13.848 21:43:34 -- host/timeout.sh@61 -- # sleep 5 00:30:15.219 [2024-07-11 21:43:36.160358] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.219 [2024-07-11 21:43:36.160476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.219 [2024-07-11 21:43:36.160539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.219 [2024-07-11 21:43:36.160558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x68d3a0 with addr=10.0.0.2, port=4420 00:30:15.219 [2024-07-11 21:43:36.160573] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x68d3a0 is same with the state(5) to be set 00:30:15.219 [2024-07-11 21:43:36.160601] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x68d3a0 (9): Bad file descriptor 00:30:15.219 [2024-07-11 21:43:36.160621] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:15.219 [2024-07-11 21:43:36.160631] 
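host/timeout.sh@57 and @58 above assert that, two seconds into the outage, the controller and its namespace bdev are still registered even though every reconnect attempt is failing with errno 111. The same check can be reproduced against the running bdevperf instance roughly as follows; the rpc.py and socket paths are copied from the trace, and the shell variable names are illustrative only.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock
ctrlr=$("$rpc" -s "$sock" bdev_nvme_get_controllers | jq -r '.[].name')   # expected: NVMe0
bdev=$("$rpc" -s "$sock" bdev_get_bdevs | jq -r '.[].name')               # expected: NVMe0n1
[[ $ctrlr == NVMe0 && $bdev == NVMe0n1 ]] || echo "controller was dropped during the outage"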
nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:15.219 [2024-07-11 21:43:36.160643] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:15.219 [2024-07-11 21:43:36.160685] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:15.219 [2024-07-11 21:43:36.160698] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:17.747 [2024-07-11 21:43:38.160742] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:17.747 [2024-07-11 21:43:38.160825] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:17.747 [2024-07-11 21:43:38.160838] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:17.747 [2024-07-11 21:43:38.160850] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:30:17.747 [2024-07-11 21:43:38.160882] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:18.314 00:30:18.314 Latency(us) 00:30:18.314 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:18.314 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:30:18.314 Verification LBA range: start 0x0 length 0x4000 00:30:18.314 NVMe0n1 : 8.12 1809.67 7.07 15.76 0.00 70024.82 2829.96 7015926.69 00:30:18.314 =================================================================================================================== 00:30:18.314 Total : 1809.67 7.07 15.76 0.00 70024.82 2829.96 7015926.69 00:30:18.314 0 00:30:18.880 21:43:39 -- host/timeout.sh@62 -- # get_controller 00:30:18.880 21:43:39 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:18.880 21:43:39 -- host/timeout.sh@41 -- # jq -r '.[].name' 00:30:19.138 21:43:39 -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:30:19.138 21:43:39 -- host/timeout.sh@63 -- # get_bdev 00:30:19.138 21:43:39 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:30:19.138 21:43:39 -- host/timeout.sh@37 -- # jq -r '.[].name' 00:30:19.395 21:43:40 -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:30:19.395 21:43:40 -- host/timeout.sh@65 -- # wait 85680 00:30:19.395 21:43:40 -- host/timeout.sh@67 -- # killprocess 85656 00:30:19.396 21:43:40 -- common/autotest_common.sh@926 -- # '[' -z 85656 ']' 00:30:19.396 21:43:40 -- common/autotest_common.sh@930 -- # kill -0 85656 00:30:19.396 21:43:40 -- common/autotest_common.sh@931 -- # uname 00:30:19.396 21:43:40 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:19.396 21:43:40 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 85656 00:30:19.396 21:43:40 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:30:19.396 killing process with pid 85656 00:30:19.396 21:43:40 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:30:19.396 21:43:40 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 85656' 00:30:19.396 Received shutdown signal, test time was about 9.286217 seconds 00:30:19.396 00:30:19.396 Latency(us) 00:30:19.396 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:19.396 
=================================================================================================================== 00:30:19.396 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:19.396 21:43:40 -- common/autotest_common.sh@945 -- # kill 85656 00:30:19.396 21:43:40 -- common/autotest_common.sh@950 -- # wait 85656 00:30:19.655 21:43:40 -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:19.915 [2024-07-11 21:43:40.832431] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:19.915 21:43:40 -- host/timeout.sh@74 -- # bdevperf_pid=85800 00:30:19.915 21:43:40 -- host/timeout.sh@76 -- # waitforlisten 85800 /var/tmp/bdevperf.sock 00:30:19.915 21:43:40 -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:30:19.915 21:43:40 -- common/autotest_common.sh@819 -- # '[' -z 85800 ']' 00:30:19.915 21:43:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:19.915 21:43:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:19.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:19.915 21:43:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:19.915 21:43:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:19.915 21:43:40 -- common/autotest_common.sh@10 -- # set +x 00:30:20.173 [2024-07-11 21:43:40.907902] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:30:20.173 [2024-07-11 21:43:40.908033] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85800 ] 00:30:20.173 [2024-07-11 21:43:41.053096] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:20.431 [2024-07-11 21:43:41.145860] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:20.996 21:43:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:20.996 21:43:41 -- common/autotest_common.sh@852 -- # return 0 00:30:20.996 21:43:41 -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:30:21.253 21:43:42 -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:30:21.511 NVMe0n1 00:30:21.769 21:43:42 -- host/timeout.sh@84 -- # rpc_pid=85825 00:30:21.769 21:43:42 -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:21.769 21:43:42 -- host/timeout.sh@86 -- # sleep 1 00:30:21.769 Running I/O for 10 seconds... 
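host/timeout.sh@78 and @79 above configure the reconnect behaviour that the second half of this run exercises. Pulled together from the trace (socket path, address, and NQN exactly as logged; only the shell variables are added for readability), the setup is roughly:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock
# -r -1: retry-count option as invoked by host/timeout.sh (flag and value copied from the trace above)
"$rpc" -s "$sock" bdev_nvme_set_options -r -1
# Attach the remote controller with the timeout knobs this test exercises:
#   --ctrlr-loss-timeout-sec 5    give up on the controller after 5 s without a connection
#   --fast-io-fail-timeout-sec 2  start failing queued I/O back 2 s into an outage
#   --reconnect-delay-sec 1       retry the TCP connection once per second
"$rpc" -s "$sock" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
  -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1

When the listener is removed from the target a moment later, these values determine how long the nvme bdev module keeps retrying before the controller is dropped.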
00:30:22.701 21:43:43 -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:22.961 [2024-07-11 21:43:43.733696] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdb930 is same with the state(5) to be set 00:30:22.961 [2024-07-11 21:43:43.733766] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdb930 is same with the state(5) to be set 00:30:22.961 [2024-07-11 21:43:43.733779] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdb930 is same with the state(5) to be set 00:30:22.961 [2024-07-11 21:43:43.733788] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdb930 is same with the state(5) to be set 00:30:22.961 [2024-07-11 21:43:43.733797] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdb930 is same with the state(5) to be set 00:30:22.961 [2024-07-11 21:43:43.733805] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdb930 is same with the state(5) to be set 00:30:22.961 [2024-07-11 21:43:43.733813] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdb930 is same with the state(5) to be set 00:30:22.961 [2024-07-11 21:43:43.733821] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdb930 is same with the state(5) to be set 00:30:22.961 [2024-07-11 21:43:43.733829] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdb930 is same with the state(5) to be set 00:30:22.961 [2024-07-11 21:43:43.733838] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdb930 is same with the state(5) to be set 00:30:22.961 [2024-07-11 21:43:43.733847] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdb930 is same with the state(5) to be set 00:30:22.961 [2024-07-11 21:43:43.733855] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdb930 is same with the state(5) to be set 00:30:22.961 [2024-07-11 21:43:43.733863] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdb930 is same with the state(5) to be set 00:30:22.961 [2024-07-11 21:43:43.733871] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdb930 is same with the state(5) to be set 00:30:22.961 [2024-07-11 21:43:43.733879] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdb930 is same with the state(5) to be set 00:30:22.961 [2024-07-11 21:43:43.733887] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdb930 is same with the state(5) to be set 00:30:22.961 [2024-07-11 21:43:43.733895] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdb930 is same with the state(5) to be set 00:30:22.961 [2024-07-11 21:43:43.733903] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdb930 is same with the state(5) to be set 00:30:22.961 [2024-07-11 21:43:43.733911] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdb930 is same with the state(5) to be set 00:30:22.961 [2024-07-11 21:43:43.733919] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdb930 is same with the state(5) to be set 00:30:22.961 [2024-07-11 21:43:43.733927] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x1bdb930 is same with the state(5) to be set 00:30:22.961 [2024-07-11 21:43:43.733935] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdb930 is same with the state(5) to be set 00:30:22.961 [2024-07-11 21:43:43.733942] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdb930 is same with the state(5) to be set 00:30:22.961 [2024-07-11 21:43:43.733950] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdb930 is same with the state(5) to be set 00:30:22.961 [2024-07-11 21:43:43.733958] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdb930 is same with the state(5) to be set 00:30:22.961 [2024-07-11 21:43:43.733967] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdb930 is same with the state(5) to be set 00:30:22.961 [2024-07-11 21:43:43.734028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:120728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.961 [2024-07-11 21:43:43.734059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.961 [2024-07-11 21:43:43.734081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:120032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.961 [2024-07-11 21:43:43.734092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.961 [2024-07-11 21:43:43.734105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:120048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.961 [2024-07-11 21:43:43.734115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.961 [2024-07-11 21:43:43.734126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:120056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.961 [2024-07-11 21:43:43.734136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.961 [2024-07-11 21:43:43.734148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:120072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.961 [2024-07-11 21:43:43.734157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.961 [2024-07-11 21:43:43.734168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:120104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.961 [2024-07-11 21:43:43.734177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.961 [2024-07-11 21:43:43.734188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:120120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.961 [2024-07-11 21:43:43.734198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.961 [2024-07-11 21:43:43.734209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:120136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.961 
[2024-07-11 21:43:43.734218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.961 [2024-07-11 21:43:43.734229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:120152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.961 [2024-07-11 21:43:43.734240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.961 [2024-07-11 21:43:43.734251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:120744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.961 [2024-07-11 21:43:43.734261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.961 [2024-07-11 21:43:43.734272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:120776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.961 [2024-07-11 21:43:43.734282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.961 [2024-07-11 21:43:43.734293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:120792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.961 [2024-07-11 21:43:43.734302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.961 [2024-07-11 21:43:43.734314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:120824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.961 [2024-07-11 21:43:43.734323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.961 [2024-07-11 21:43:43.734334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:120832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.961 [2024-07-11 21:43:43.734343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.961 [2024-07-11 21:43:43.734354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:120840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.961 [2024-07-11 21:43:43.734363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.961 [2024-07-11 21:43:43.734374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:120848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.961 [2024-07-11 21:43:43.734383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.961 [2024-07-11 21:43:43.734394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:120160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.961 [2024-07-11 21:43:43.734408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.961 [2024-07-11 21:43:43.734420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:120168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.961 [2024-07-11 21:43:43.734429] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.961 [2024-07-11 21:43:43.734441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:120184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.961 [2024-07-11 21:43:43.734450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.961 [2024-07-11 21:43:43.734462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:120192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.961 [2024-07-11 21:43:43.734472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.961 [2024-07-11 21:43:43.734501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:120200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.961 [2024-07-11 21:43:43.734515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.961 [2024-07-11 21:43:43.734527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:120248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.961 [2024-07-11 21:43:43.734536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.961 [2024-07-11 21:43:43.734547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:120256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.961 [2024-07-11 21:43:43.734557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.961 [2024-07-11 21:43:43.734569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:120272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.962 [2024-07-11 21:43:43.734589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.962 [2024-07-11 21:43:43.734601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:120864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.962 [2024-07-11 21:43:43.734611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.962 [2024-07-11 21:43:43.734622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:120872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.962 [2024-07-11 21:43:43.734633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.962 [2024-07-11 21:43:43.734655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:120880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.962 [2024-07-11 21:43:43.734664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.962 [2024-07-11 21:43:43.734676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:120888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.962 [2024-07-11 21:43:43.734686] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.962 [2024-07-11 21:43:43.734698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:120896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.962 [2024-07-11 21:43:43.734708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.962 [2024-07-11 21:43:43.734719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:120904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.962 [2024-07-11 21:43:43.734729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.962 [2024-07-11 21:43:43.734740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:120912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.962 [2024-07-11 21:43:43.734749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.962 [2024-07-11 21:43:43.734761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:120920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.962 [2024-07-11 21:43:43.734770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.962 [2024-07-11 21:43:43.734781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:120928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.962 [2024-07-11 21:43:43.734791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.962 [2024-07-11 21:43:43.734802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:120936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.962 [2024-07-11 21:43:43.734811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.962 [2024-07-11 21:43:43.734822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:120944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.962 [2024-07-11 21:43:43.734831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.962 [2024-07-11 21:43:43.734842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:120952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.962 [2024-07-11 21:43:43.734852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.962 [2024-07-11 21:43:43.734863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:120960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.962 [2024-07-11 21:43:43.734872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.962 [2024-07-11 21:43:43.734883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:120968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.962 [2024-07-11 21:43:43.734893] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.962 [2024-07-11 21:43:43.734904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:120976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.962 [2024-07-11 21:43:43.734913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.962 [2024-07-11 21:43:43.734925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:120984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.962 [2024-07-11 21:43:43.734934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.962 [2024-07-11 21:43:43.734945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:120288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.962 [2024-07-11 21:43:43.734955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.962 [2024-07-11 21:43:43.734966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:120312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.962 [2024-07-11 21:43:43.734975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.962 [2024-07-11 21:43:43.734987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:120336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.962 [2024-07-11 21:43:43.734997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.962 [2024-07-11 21:43:43.735009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:120344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.962 [2024-07-11 21:43:43.735018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.962 [2024-07-11 21:43:43.735032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:120360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.962 [2024-07-11 21:43:43.735041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.962 [2024-07-11 21:43:43.735053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:120368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.962 [2024-07-11 21:43:43.735062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.962 [2024-07-11 21:43:43.735074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:120376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.962 [2024-07-11 21:43:43.735083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.962 [2024-07-11 21:43:43.735095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:120440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.962 [2024-07-11 21:43:43.735111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.962 [2024-07-11 21:43:43.735130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:120992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.962 [2024-07-11 21:43:43.735146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.962 [2024-07-11 21:43:43.735164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:121000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.962 [2024-07-11 21:43:43.735175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.962 [2024-07-11 21:43:43.735186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:121008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.962 [2024-07-11 21:43:43.735195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.962 [2024-07-11 21:43:43.735207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:121016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.962 [2024-07-11 21:43:43.735216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.962 [2024-07-11 21:43:43.735227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:121024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.962 [2024-07-11 21:43:43.735236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.962 [2024-07-11 21:43:43.735247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:121032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.962 [2024-07-11 21:43:43.735256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.962 [2024-07-11 21:43:43.735267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:121040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.962 [2024-07-11 21:43:43.735277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.962 [2024-07-11 21:43:43.735288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:121048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.962 [2024-07-11 21:43:43.735297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.962 [2024-07-11 21:43:43.735308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:121056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.962 [2024-07-11 21:43:43.735318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.962 [2024-07-11 21:43:43.735329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:121064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.962 [2024-07-11 21:43:43.735339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:30:22.962 [2024-07-11 21:43:43.735352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:121072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.962 [2024-07-11 21:43:43.735361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.962 [2024-07-11 21:43:43.735373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:121080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.962 [2024-07-11 21:43:43.735382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.962 [2024-07-11 21:43:43.735394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:121088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.962 [2024-07-11 21:43:43.735403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.962 [2024-07-11 21:43:43.735414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:121096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.962 [2024-07-11 21:43:43.735424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.962 [2024-07-11 21:43:43.735435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:121104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.962 [2024-07-11 21:43:43.735444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.962 [2024-07-11 21:43:43.735455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:121112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.962 [2024-07-11 21:43:43.735464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.962 [2024-07-11 21:43:43.735475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:121120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.962 [2024-07-11 21:43:43.735495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.962 [2024-07-11 21:43:43.735509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:121128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.963 [2024-07-11 21:43:43.735518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.963 [2024-07-11 21:43:43.735530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:121136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.963 [2024-07-11 21:43:43.735539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.963 [2024-07-11 21:43:43.735550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:121144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.963 [2024-07-11 21:43:43.735560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.963 
[2024-07-11 21:43:43.735571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:120448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.963 [2024-07-11 21:43:43.735581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.963 [2024-07-11 21:43:43.735592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:120456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.963 [2024-07-11 21:43:43.735602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.963 [2024-07-11 21:43:43.735613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:120480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.963 [2024-07-11 21:43:43.735622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.963 [2024-07-11 21:43:43.735633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:120488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.963 [2024-07-11 21:43:43.735642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.963 [2024-07-11 21:43:43.735654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:120496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.963 [2024-07-11 21:43:43.735663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.963 [2024-07-11 21:43:43.735675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:120568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.963 [2024-07-11 21:43:43.735685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.963 [2024-07-11 21:43:43.735696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:120584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.963 [2024-07-11 21:43:43.735706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.963 [2024-07-11 21:43:43.735717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:120592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.963 [2024-07-11 21:43:43.735727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.963 [2024-07-11 21:43:43.735739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:121152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.963 [2024-07-11 21:43:43.735748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.963 [2024-07-11 21:43:43.735760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:121160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.963 [2024-07-11 21:43:43.735769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.963 [2024-07-11 21:43:43.735781] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:121168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.963 [2024-07-11 21:43:43.735790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.963 [2024-07-11 21:43:43.735801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:121176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.963 [2024-07-11 21:43:43.735811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.963 [2024-07-11 21:43:43.735822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:121184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.963 [2024-07-11 21:43:43.735832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.963 [2024-07-11 21:43:43.735843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:121192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.963 [2024-07-11 21:43:43.735852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.963 [2024-07-11 21:43:43.735863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:121200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.963 [2024-07-11 21:43:43.735873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.963 [2024-07-11 21:43:43.735884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:121208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.963 [2024-07-11 21:43:43.735893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.963 [2024-07-11 21:43:43.735904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:121216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.963 [2024-07-11 21:43:43.735914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.963 [2024-07-11 21:43:43.735925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:121224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.963 [2024-07-11 21:43:43.735935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.963 [2024-07-11 21:43:43.735946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:121232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.963 [2024-07-11 21:43:43.735955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.963 [2024-07-11 21:43:43.735966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:121240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.963 [2024-07-11 21:43:43.735975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.963 [2024-07-11 21:43:43.735986] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:121248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.963 [2024-07-11 21:43:43.735995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.963 [2024-07-11 21:43:43.736007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:121256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.963 [2024-07-11 21:43:43.736024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.963 [2024-07-11 21:43:43.736036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:121264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.963 [2024-07-11 21:43:43.736045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.963 [2024-07-11 21:43:43.736057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:121272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.963 [2024-07-11 21:43:43.736067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.963 [2024-07-11 21:43:43.736078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:121280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.963 [2024-07-11 21:43:43.736088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.963 [2024-07-11 21:43:43.736100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:121288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.963 [2024-07-11 21:43:43.736109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.963 [2024-07-11 21:43:43.736121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:121296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.963 [2024-07-11 21:43:43.736130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.963 [2024-07-11 21:43:43.736142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:120624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.963 [2024-07-11 21:43:43.736152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.963 [2024-07-11 21:43:43.736164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:120632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.963 [2024-07-11 21:43:43.736173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.963 [2024-07-11 21:43:43.736184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:120640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.963 [2024-07-11 21:43:43.736193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.963 [2024-07-11 21:43:43.736205] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:8 nsid:1 lba:120664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.963 [2024-07-11 21:43:43.736214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.963 [2024-07-11 21:43:43.736225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:120680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.963 [2024-07-11 21:43:43.736234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.963 [2024-07-11 21:43:43.736245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:120704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.963 [2024-07-11 21:43:43.736254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.963 [2024-07-11 21:43:43.736266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:120736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.963 [2024-07-11 21:43:43.736275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.963 [2024-07-11 21:43:43.736287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:120752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.963 [2024-07-11 21:43:43.736296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.963 [2024-07-11 21:43:43.736307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:121304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.963 [2024-07-11 21:43:43.736316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.963 [2024-07-11 21:43:43.736328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:121312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.963 [2024-07-11 21:43:43.736337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.963 [2024-07-11 21:43:43.736356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:121320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.963 [2024-07-11 21:43:43.736370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.963 [2024-07-11 21:43:43.736382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:121328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.963 [2024-07-11 21:43:43.736391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.963 [2024-07-11 21:43:43.736402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:121336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.963 [2024-07-11 21:43:43.736411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.964 [2024-07-11 21:43:43.736422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 
lba:121344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.964 [2024-07-11 21:43:43.736431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.964 [2024-07-11 21:43:43.736442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:121352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.964 [2024-07-11 21:43:43.736451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.964 [2024-07-11 21:43:43.736463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:121360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.964 [2024-07-11 21:43:43.736472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.964 [2024-07-11 21:43:43.736493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:121368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.964 [2024-07-11 21:43:43.736505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.964 [2024-07-11 21:43:43.736517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:121376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.964 [2024-07-11 21:43:43.736526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.964 [2024-07-11 21:43:43.736538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:121384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.964 [2024-07-11 21:43:43.736547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.964 [2024-07-11 21:43:43.736559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:121392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.964 [2024-07-11 21:43:43.736568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.964 [2024-07-11 21:43:43.736579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:121400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.964 [2024-07-11 21:43:43.736588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.964 [2024-07-11 21:43:43.736599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:121408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.964 [2024-07-11 21:43:43.736608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.964 [2024-07-11 21:43:43.736619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.964 [2024-07-11 21:43:43.736628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.964 [2024-07-11 21:43:43.736639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:121424 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:30:22.964 [2024-07-11 21:43:43.736648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.964 [2024-07-11 21:43:43.736659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:121432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.964 [2024-07-11 21:43:43.736669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.964 [2024-07-11 21:43:43.736681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:121440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.964 [2024-07-11 21:43:43.736690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.964 [2024-07-11 21:43:43.736701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:120760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.964 [2024-07-11 21:43:43.736717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.964 [2024-07-11 21:43:43.736729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:120768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.964 [2024-07-11 21:43:43.736738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.964 [2024-07-11 21:43:43.736749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:120784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.964 [2024-07-11 21:43:43.736758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.964 [2024-07-11 21:43:43.736769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:120800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.964 [2024-07-11 21:43:43.736779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.964 [2024-07-11 21:43:43.736790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:120808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.964 [2024-07-11 21:43:43.736799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.964 [2024-07-11 21:43:43.736810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:120816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.964 [2024-07-11 21:43:43.736819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.964 [2024-07-11 21:43:43.736829] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe3f440 is same with the state(5) to be set 00:30:22.964 [2024-07-11 21:43:43.736842] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:22.964 [2024-07-11 21:43:43.736851] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:22.964 [2024-07-11 21:43:43.736860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 
lba:120856 len:8 PRP1 0x0 PRP2 0x0
00:30:22.964 [2024-07-11 21:43:43.736869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:22.964 [2024-07-11 21:43:43.736922] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xe3f440 was disconnected and freed. reset controller.
00:30:22.964 [2024-07-11 21:43:43.737018] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:30:22.964 [2024-07-11 21:43:43.737040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:22.964 [2024-07-11 21:43:43.737053] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:30:22.964 [2024-07-11 21:43:43.737062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:22.964 [2024-07-11 21:43:43.737073] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:30:22.964 [2024-07-11 21:43:43.737082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:22.964 [2024-07-11 21:43:43.737092] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:30:22.964 [2024-07-11 21:43:43.737101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:22.964 [2024-07-11 21:43:43.737110] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe443a0 is same with the state(5) to be set
00:30:22.964 [2024-07-11 21:43:43.737327] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:22.964 [2024-07-11 21:43:43.737348] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe443a0 (9): Bad file descriptor
00:30:22.964 [2024-07-11 21:43:43.737460] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.964 [2024-07-11 21:43:43.737546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.964 [2024-07-11 21:43:43.737591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.964 [2024-07-11 21:43:43.737607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe443a0 with addr=10.0.0.2, port=4420
00:30:22.964 [2024-07-11 21:43:43.737626] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe443a0 is same with the state(5) to be set
00:30:22.964 [2024-07-11 21:43:43.737646] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe443a0 (9): Bad file descriptor
00:30:22.964 [2024-07-11 21:43:43.737662] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:22.964 [2024-07-11 21:43:43.737672] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:22.964 [2024-07-11 21:43:43.737683] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:22.964 [2024-07-11 21:43:43.737704] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
[2024-07-11 21:43:43.737715] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
21:43:43 -- host/timeout.sh@90 -- # sleep 1
00:30:23.897 [2024-07-11 21:43:44.737904] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.897 [2024-07-11 21:43:44.738019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.897 [2024-07-11 21:43:44.738064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.897 [2024-07-11 21:43:44.738081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe443a0 with addr=10.0.0.2, port=4420
[2024-07-11 21:43:44.738094] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe443a0 is same with the state(5) to be set
00:30:23.897 [2024-07-11 21:43:44.738124] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe443a0 (9): Bad file descriptor
00:30:23.897 [2024-07-11 21:43:44.738143] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:23.898 [2024-07-11 21:43:44.738153] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:23.898 [2024-07-11 21:43:44.738164] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:23.898 [2024-07-11 21:43:44.738193] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:23.898 [2024-07-11 21:43:44.738205] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:23.898 21:43:44 -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:30:24.155 [2024-07-11 21:43:45.023962] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:30:24.155 21:43:45 -- host/timeout.sh@92 -- # wait 85825
00:30:25.087 [2024-07-11 21:43:45.754331] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:30:33.194
00:30:33.194                                                                       Latency(us)
00:30:33.194 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:30:33.194 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:30:33.194   Verification LBA range: start 0x0 length 0x4000
00:30:33.194   NVMe0n1                   :      10.01    9300.19      36.33       0.00       0.00   13739.67     953.25 3019898.88
00:30:33.194 ===================================================================================================================
00:30:33.194 Total                       :               9300.19      36.33       0.00       0.00   13739.67     953.25 3019898.88
00:30:33.194 0
00:30:33.194 21:43:52 -- host/timeout.sh@97 -- # rpc_pid=85934
00:30:33.194 21:43:52 -- host/timeout.sh@98 -- # sleep 1
00:30:33.194 21:43:52 -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:30:33.194 Running I/O for 10 seconds...
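For reference, the listener bounce that host/timeout.sh exercises at this point can be reproduced by hand with the same RPC calls that appear in the trace above and below; the following is a minimal sketch using the subsystem NQN, address, and port from this run, not part of the captured output:

  # restore the TCP listener so the initiator can reconnect after the induced timeout
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # drop the listener again to push the initiator back into its reconnect/timeout path
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
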
00:30:33.194 21:43:53 -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:33.194 [2024-07-11 21:43:53.865304] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bda9a0 is same with the state(5) to be set 00:30:33.194 [2024-07-11 21:43:53.865370] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bda9a0 is same with the state(5) to be set 00:30:33.194 [2024-07-11 21:43:53.865383] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bda9a0 is same with the state(5) to be set 00:30:33.194 [2024-07-11 21:43:53.865392] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bda9a0 is same with the state(5) to be set 00:30:33.194 [2024-07-11 21:43:53.865401] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bda9a0 is same with the state(5) to be set 00:30:33.194 [2024-07-11 21:43:53.865410] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bda9a0 is same with the state(5) to be set 00:30:33.194 [2024-07-11 21:43:53.865420] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bda9a0 is same with the state(5) to be set 00:30:33.194 [2024-07-11 21:43:53.865428] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bda9a0 is same with the state(5) to be set 00:30:33.194 [2024-07-11 21:43:53.865442] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bda9a0 is same with the state(5) to be set 00:30:33.194 [2024-07-11 21:43:53.865451] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bda9a0 is same with the state(5) to be set 00:30:33.194 [2024-07-11 21:43:53.865459] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bda9a0 is same with the state(5) to be set 00:30:33.194 [2024-07-11 21:43:53.865468] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bda9a0 is same with the state(5) to be set 00:30:33.194 [2024-07-11 21:43:53.865476] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bda9a0 is same with the state(5) to be set 00:30:33.194 [2024-07-11 21:43:53.865498] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bda9a0 is same with the state(5) to be set 00:30:33.194 [2024-07-11 21:43:53.865509] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bda9a0 is same with the state(5) to be set 00:30:33.194 [2024-07-11 21:43:53.865517] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bda9a0 is same with the state(5) to be set 00:30:33.194 [2024-07-11 21:43:53.865526] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bda9a0 is same with the state(5) to be set 00:30:33.194 [2024-07-11 21:43:53.865534] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bda9a0 is same with the state(5) to be set 00:30:33.194 [2024-07-11 21:43:53.865542] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bda9a0 is same with the state(5) to be set 00:30:33.194 [2024-07-11 21:43:53.865551] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bda9a0 is same with the state(5) to be set 00:30:33.194 [2024-07-11 21:43:53.865560] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x1bda9a0 is same with the state(5) to be set 00:30:33.194 [2024-07-11 21:43:53.865569] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bda9a0 is same with the state(5) to be set 00:30:33.194 [2024-07-11 21:43:53.865577] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bda9a0 is same with the state(5) to be set 00:30:33.194 [2024-07-11 21:43:53.865585] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bda9a0 is same with the state(5) to be set 00:30:33.194 [2024-07-11 21:43:53.865593] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bda9a0 is same with the state(5) to be set 00:30:33.194 [2024-07-11 21:43:53.865602] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bda9a0 is same with the state(5) to be set 00:30:33.194 [2024-07-11 21:43:53.865610] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bda9a0 is same with the state(5) to be set 00:30:33.194 [2024-07-11 21:43:53.865668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:119776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.194 [2024-07-11 21:43:53.865698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:33.194 [2024-07-11 21:43:53.865720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:119784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.194 [2024-07-11 21:43:53.865731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:33.194 [2024-07-11 21:43:53.865743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:119056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.194 [2024-07-11 21:43:53.865753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:33.194 [2024-07-11 21:43:53.865765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:119064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.194 [2024-07-11 21:43:53.865774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:33.194 [2024-07-11 21:43:53.865786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:119072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.194 [2024-07-11 21:43:53.865796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:33.194 [2024-07-11 21:43:53.865808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:119096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.194 [2024-07-11 21:43:53.865818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:33.194 [2024-07-11 21:43:53.865830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:119120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.194 [2024-07-11 21:43:53.865839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:33.194 [2024-07-11 
21:43:53.865850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:119128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.195 [2024-07-11 21:43:53.865860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:33.195 [2024-07-11 21:43:53.865871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:119136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.195 [2024-07-11 21:43:53.865881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:33.195 [2024-07-11 21:43:53.865903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:119144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.195 [2024-07-11 21:43:53.865913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:33.195 [2024-07-11 21:43:53.865925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:119816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.195 [2024-07-11 21:43:53.865935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:33.195 [2024-07-11 21:43:53.865946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:119824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.195 [2024-07-11 21:43:53.865955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:33.195 [2024-07-11 21:43:53.865966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:119832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.195 [2024-07-11 21:43:53.865976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:33.195 [2024-07-11 21:43:53.865987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:119856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.195 [2024-07-11 21:43:53.865997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:33.195 [2024-07-11 21:43:53.866008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:119864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.195 [2024-07-11 21:43:53.866023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:33.195 [2024-07-11 21:43:53.866034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:119168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.195 [2024-07-11 21:43:53.866044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:33.195 [2024-07-11 21:43:53.866055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:119184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.195 [2024-07-11 21:43:53.866067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:33.195 [2024-07-11 21:43:53.866078] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:119200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.195 [2024-07-11 21:43:53.866088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:33.195 [2024-07-11 21:43:53.866099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:119208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.195 [2024-07-11 21:43:53.866109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:33.195 [2024-07-11 21:43:53.866120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:119216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.195 [2024-07-11 21:43:53.866129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:33.195 [2024-07-11 21:43:53.866140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:119240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.195 [2024-07-11 21:43:53.866150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:33.195 [2024-07-11 21:43:53.866161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:119248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.195 [2024-07-11 21:43:53.866170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:33.195 [2024-07-11 21:43:53.866182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:119256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.195 [2024-07-11 21:43:53.866191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:33.195 [2024-07-11 21:43:53.866202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:119880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.195 [2024-07-11 21:43:53.866212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:33.195 [2024-07-11 21:43:53.866222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:119888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.195 [2024-07-11 21:43:53.866231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:33.195 [2024-07-11 21:43:53.866242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:119896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.195 [2024-07-11 21:43:53.866251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:33.195 [2024-07-11 21:43:53.866263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:119904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.195 [2024-07-11 21:43:53.866272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:33.195 [2024-07-11 21:43:53.866285] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:84 nsid:1 lba:119912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.195 [2024-07-11 21:43:53.866294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:33.195 [2024-07-11 21:43:53.866306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:119920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.195 [2024-07-11 21:43:53.866316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:33.195 [2024-07-11 21:43:53.866327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:119928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.195 [2024-07-11 21:43:53.866336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:33.195 [2024-07-11 21:43:53.866348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:119936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.195 [2024-07-11 21:43:53.866357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:33.195 [2024-07-11 21:43:53.866368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:119944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.195 [2024-07-11 21:43:53.866377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:33.195 [2024-07-11 21:43:53.866389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:119952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.195 [2024-07-11 21:43:53.866400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:33.195 [2024-07-11 21:43:53.866412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:119960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.195 [2024-07-11 21:43:53.866422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:33.195 [2024-07-11 21:43:53.866433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:119968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.195 [2024-07-11 21:43:53.866443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:33.195 [2024-07-11 21:43:53.866454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:119976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.195 [2024-07-11 21:43:53.866463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:33.195 [2024-07-11 21:43:53.866474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:119280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.195 [2024-07-11 21:43:53.866496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:33.195 [2024-07-11 21:43:53.866510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 
lba:119288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.195 [2024-07-11 21:43:53.866520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:33.195 [2024-07-11 21:43:53.866531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:119304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.195 [2024-07-11 21:43:53.866540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:33.195 [2024-07-11 21:43:53.866552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:119312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.195 [2024-07-11 21:43:53.866562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:33.195 [2024-07-11 21:43:53.866573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:119320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.195 [2024-07-11 21:43:53.866592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:33.195 [2024-07-11 21:43:53.866604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:119344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.195 [2024-07-11 21:43:53.866614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:33.195 [2024-07-11 21:43:53.866625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:119400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.195 [2024-07-11 21:43:53.866635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:33.195 [2024-07-11 21:43:53.866648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:119408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.195 [2024-07-11 21:43:53.866657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:33.195 [2024-07-11 21:43:53.866669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:119984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.195 [2024-07-11 21:43:53.866679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:33.195 [2024-07-11 21:43:53.866691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:119992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.195 [2024-07-11 21:43:53.866700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:33.195 [2024-07-11 21:43:53.866711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:120000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.195 [2024-07-11 21:43:53.866720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:33.195 [2024-07-11 21:43:53.866731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:120008 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:30:33.195 [2024-07-11 21:43:53.866741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:33.196 [2024-07-11 21:43:53.866752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:120016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.196 [2024-07-11 21:43:53.866763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:33.196 [2024-07-11 21:43:53.866774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:120024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.196 [2024-07-11 21:43:53.866784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:33.196 [2024-07-11 21:43:53.866795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:120032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.196 [2024-07-11 21:43:53.866804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:33.196 [2024-07-11 21:43:53.866815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:120040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.196 [2024-07-11 21:43:53.866825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:33.196 [2024-07-11 21:43:53.866839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:120048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.196 [2024-07-11 21:43:53.866848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:33.196 [2024-07-11 21:43:53.866860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:120056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.196 [2024-07-11 21:43:53.866869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:33.196 [2024-07-11 21:43:53.866880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:120064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.196 [2024-07-11 21:43:53.866890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:33.196 [2024-07-11 21:43:53.866901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:120072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.196 [2024-07-11 21:43:53.866910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:33.196 [2024-07-11 21:43:53.866921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:120080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.196 [2024-07-11 21:43:53.866930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:33.196 [2024-07-11 21:43:53.866941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:120088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.196 
[2024-07-11 21:43:53.866951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:33.196 [2024-07-11 21:43:53.866962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:120096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.196 [2024-07-11 21:43:53.866971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:33.196 [2024-07-11 21:43:53.866982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:120104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.196 [2024-07-11 21:43:53.866991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:33.196 [2024-07-11 21:43:53.867002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:120112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.196 [2024-07-11 21:43:53.867011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:33.196 [2024-07-11 21:43:53.867023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:120120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.196 [2024-07-11 21:43:53.867031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:33.196 [2024-07-11 21:43:53.867042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:120128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.196 [2024-07-11 21:43:53.867052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:33.196 [2024-07-11 21:43:53.867063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:119432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.196 [2024-07-11 21:43:53.867077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:33.196 [2024-07-11 21:43:53.867088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:119440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.196 [2024-07-11 21:43:53.867099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:33.196 [2024-07-11 21:43:53.867112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:119456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.196 [2024-07-11 21:43:53.867121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:33.196 [2024-07-11 21:43:53.867133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:119472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.196 [2024-07-11 21:43:53.867142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:33.196 [2024-07-11 21:43:53.867154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:119488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.196 [2024-07-11 21:43:53.867164] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:33.196 [2024-07-11 21:43:53.867175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:119496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.196 [2024-07-11 21:43:53.867184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:33.196 [2024-07-11 21:43:53.867196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:119504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.196 [2024-07-11 21:43:53.867205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:33.196 [2024-07-11 21:43:53.867216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:119512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.196 [2024-07-11 21:43:53.867225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:33.196 [2024-07-11 21:43:53.867236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:120136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.196 [2024-07-11 21:43:53.867245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:33.196 [2024-07-11 21:43:53.867257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:120144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.196 [2024-07-11 21:43:53.867266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:33.196 [2024-07-11 21:43:53.867277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:120152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.196 [2024-07-11 21:43:53.867287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:33.196 [2024-07-11 21:43:53.867298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:120160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.196 [2024-07-11 21:43:53.867306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:33.196 [2024-07-11 21:43:53.867318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:120168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.196 [2024-07-11 21:43:53.867327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:33.196 [2024-07-11 21:43:53.867339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:120176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.196 [2024-07-11 21:43:53.867348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:33.196 [2024-07-11 21:43:53.867359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:120184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.196 [2024-07-11 21:43:53.867368] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:33.196 [2024-07-11 21:43:53.867379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:120192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.196 [2024-07-11 21:43:53.867388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:33.196 [2024-07-11 21:43:53.867399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:120200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.196 [2024-07-11 21:43:53.867408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:33.196 [2024-07-11 21:43:53.867419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:120208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.196 [2024-07-11 21:43:53.867428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:33.196 [2024-07-11 21:43:53.867439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:120216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.196 [2024-07-11 21:43:53.867449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:33.196 [2024-07-11 21:43:53.867460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:119520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.196 [2024-07-11 21:43:53.867469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:33.196 [2024-07-11 21:43:53.867479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:119552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.196 [2024-07-11 21:43:53.867499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:33.196 [2024-07-11 21:43:53.867511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:119576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.196 [2024-07-11 21:43:53.867522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:33.196 [2024-07-11 21:43:53.867533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:119584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.196 [2024-07-11 21:43:53.867542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:33.196 [2024-07-11 21:43:53.867554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:119592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.196 [2024-07-11 21:43:53.867563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:33.196 [2024-07-11 21:43:53.867581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:119608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.196 [2024-07-11 21:43:53.867590] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:33.196 [2024-07-11 21:43:53.867601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:119616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.197 [2024-07-11 21:43:53.867610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:33.197 [2024-07-11 21:43:53.867621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:119632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.197 [2024-07-11 21:43:53.867638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:33.197 [2024-07-11 21:43:53.867650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:120224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.197 [2024-07-11 21:43:53.867660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:33.197 [2024-07-11 21:43:53.867671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:120232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.197 [2024-07-11 21:43:53.867680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:33.197 [2024-07-11 21:43:53.867691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:120240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.197 [2024-07-11 21:43:53.867701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:33.197 [2024-07-11 21:43:53.867713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:120248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.197 [2024-07-11 21:43:53.867723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:33.197 [2024-07-11 21:43:53.867735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:120256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.197 [2024-07-11 21:43:53.867744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:33.197 [2024-07-11 21:43:53.867755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:120264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.197 [2024-07-11 21:43:53.867765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:33.197 [2024-07-11 21:43:53.867776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:120272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.197 [2024-07-11 21:43:53.867785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:33.197 [2024-07-11 21:43:53.867797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:120280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.197 [2024-07-11 21:43:53.867806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:33.197 [2024-07-11 21:43:53.867817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:120288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.197 [2024-07-11 21:43:53.867826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:33.197 [2024-07-11 21:43:53.867838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:120296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.197 [2024-07-11 21:43:53.867847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:33.197 [2024-07-11 21:43:53.867858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:120304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.197 [2024-07-11 21:43:53.867867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:33.197 [2024-07-11 21:43:53.867879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:120312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.197 [2024-07-11 21:43:53.867888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:33.197 [2024-07-11 21:43:53.867899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:120320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.197 [2024-07-11 21:43:53.867908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:33.197 [2024-07-11 21:43:53.867919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:120328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.197 [2024-07-11 21:43:53.867929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:33.197 [2024-07-11 21:43:53.867940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:120336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.197 [2024-07-11 21:43:53.867949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:33.197 [2024-07-11 21:43:53.867961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:120344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.197 [2024-07-11 21:43:53.867977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:33.197 [2024-07-11 21:43:53.867988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:120352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.197 [2024-07-11 21:43:53.867998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:33.197 [2024-07-11 21:43:53.868009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:120360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.197 [2024-07-11 21:43:53.868019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:30:33.197 [2024-07-11 21:43:53.868031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:119664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.197 [2024-07-11 21:43:53.868041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:33.197 [2024-07-11 21:43:53.868053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:119672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.197 [2024-07-11 21:43:53.868063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:33.197 [2024-07-11 21:43:53.868074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:119680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.197 [2024-07-11 21:43:53.868084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:33.197 [2024-07-11 21:43:53.868095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:119696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.197 [2024-07-11 21:43:53.868123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:33.197 [2024-07-11 21:43:53.868135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:119704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.197 [2024-07-11 21:43:53.868145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:33.197 [2024-07-11 21:43:53.868156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:119720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.197 [2024-07-11 21:43:53.868165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:33.197 [2024-07-11 21:43:53.868177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:119736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.197 [2024-07-11 21:43:53.868186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:33.197 [2024-07-11 21:43:53.868197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:119760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.197 [2024-07-11 21:43:53.868206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:33.197 [2024-07-11 21:43:53.868217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:120368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.197 [2024-07-11 21:43:53.868227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:33.197 [2024-07-11 21:43:53.868237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:120376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:33.197 [2024-07-11 21:43:53.868247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:33.197 
[2024-07-11 21:43:53.868258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:120384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.197 [2024-07-11 21:43:53.868267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:33.197 [2024-07-11 21:43:53.868278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:120392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.197 [2024-07-11 21:43:53.868287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:33.197 [2024-07-11 21:43:53.868299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:120400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.197 [2024-07-11 21:43:53.868308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:33.197 [2024-07-11 21:43:53.868320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:119768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.197 [2024-07-11 21:43:53.868334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:33.197 [2024-07-11 21:43:53.868346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:119792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.197 [2024-07-11 21:43:53.868356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:33.197 [2024-07-11 21:43:53.868368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:119800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.197 [2024-07-11 21:43:53.868378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:33.197 [2024-07-11 21:43:53.868389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:119808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.197 [2024-07-11 21:43:53.868399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:33.197 [2024-07-11 21:43:53.868410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:119840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.197 [2024-07-11 21:43:53.868420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:33.197 [2024-07-11 21:43:53.868431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:119848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.197 [2024-07-11 21:43:53.868440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:33.197 [2024-07-11 21:43:53.868450] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe64800 is same with the state(5) to be set 00:30:33.197 [2024-07-11 21:43:53.868467] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:33.197 [2024-07-11 21:43:53.868476] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:30:33.198 [2024-07-11 21:43:53.868494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:119872 len:8 PRP1 0x0 PRP2 0x0 00:30:33.198 [2024-07-11 21:43:53.868505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:33.198 [2024-07-11 21:43:53.868558] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xe64800 was disconnected and freed. reset controller. 00:30:33.198 [2024-07-11 21:43:53.868798] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:33.198 [2024-07-11 21:43:53.868871] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe443a0 (9): Bad file descriptor 00:30:33.198 [2024-07-11 21:43:53.868975] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.198 [2024-07-11 21:43:53.869025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.198 [2024-07-11 21:43:53.869065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.198 [2024-07-11 21:43:53.869081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe443a0 with addr=10.0.0.2, port=4420 00:30:33.198 [2024-07-11 21:43:53.869092] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe443a0 is same with the state(5) to be set 00:30:33.198 [2024-07-11 21:43:53.869110] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe443a0 (9): Bad file descriptor 00:30:33.198 [2024-07-11 21:43:53.869125] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:33.198 [2024-07-11 21:43:53.869134] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:33.198 [2024-07-11 21:43:53.869144] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:33.198 [2024-07-11 21:43:53.869166] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:33.198 [2024-07-11 21:43:53.869177] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:33.198 21:43:53 -- host/timeout.sh@101 -- # sleep 3 00:30:34.128 [2024-07-11 21:43:54.869319] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.128 [2024-07-11 21:43:54.869432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.128 [2024-07-11 21:43:54.869475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.128 [2024-07-11 21:43:54.869506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe443a0 with addr=10.0.0.2, port=4420 00:30:34.128 [2024-07-11 21:43:54.869521] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe443a0 is same with the state(5) to be set 00:30:34.128 [2024-07-11 21:43:54.869551] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe443a0 (9): Bad file descriptor 00:30:34.128 [2024-07-11 21:43:54.869570] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:34.128 [2024-07-11 21:43:54.869588] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:34.128 [2024-07-11 21:43:54.869600] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:34.128 [2024-07-11 21:43:54.869631] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:34.128 [2024-07-11 21:43:54.869643] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:35.061 [2024-07-11 21:43:55.869794] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.061 [2024-07-11 21:43:55.869907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.061 [2024-07-11 21:43:55.869950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.061 [2024-07-11 21:43:55.869966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe443a0 with addr=10.0.0.2, port=4420 00:30:35.061 [2024-07-11 21:43:55.869980] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe443a0 is same with the state(5) to be set 00:30:35.061 [2024-07-11 21:43:55.870008] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe443a0 (9): Bad file descriptor 00:30:35.061 [2024-07-11 21:43:55.870026] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:35.061 [2024-07-11 21:43:55.870036] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:35.061 [2024-07-11 21:43:55.870047] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:35.061 [2024-07-11 21:43:55.870078] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:35.061 [2024-07-11 21:43:55.870090] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:36.059 [2024-07-11 21:43:56.872138] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.059 [2024-07-11 21:43:56.872256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.059 [2024-07-11 21:43:56.872299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.059 [2024-07-11 21:43:56.872316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe443a0 with addr=10.0.0.2, port=4420 00:30:36.059 [2024-07-11 21:43:56.872339] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe443a0 is same with the state(5) to be set 00:30:36.059 [2024-07-11 21:43:56.872546] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe443a0 (9): Bad file descriptor 00:30:36.059 [2024-07-11 21:43:56.872714] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:36.059 [2024-07-11 21:43:56.872727] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:36.059 [2024-07-11 21:43:56.872738] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:36.059 [2024-07-11 21:43:56.875173] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:36.059 [2024-07-11 21:43:56.875203] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:36.059 21:43:56 -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:36.316 [2024-07-11 21:43:57.134069] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:36.316 21:43:57 -- host/timeout.sh@103 -- # wait 85934 00:30:37.249 [2024-07-11 21:43:57.904106] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:30:42.509 00:30:42.509 Latency(us) 00:30:42.509 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:42.509 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:30:42.509 Verification LBA range: start 0x0 length 0x4000 00:30:42.509 NVMe0n1 : 10.01 7976.39 31.16 5859.39 0.00 9235.86 733.56 3019898.88 00:30:42.509 =================================================================================================================== 00:30:42.509 Total : 7976.39 31.16 5859.39 0.00 9235.86 0.00 3019898.88 00:30:42.509 0 00:30:42.509 21:44:02 -- host/timeout.sh@105 -- # killprocess 85800 00:30:42.509 21:44:02 -- common/autotest_common.sh@926 -- # '[' -z 85800 ']' 00:30:42.509 21:44:02 -- common/autotest_common.sh@930 -- # kill -0 85800 00:30:42.509 21:44:02 -- common/autotest_common.sh@931 -- # uname 00:30:42.509 21:44:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:42.509 21:44:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 85800 00:30:42.509 killing process with pid 85800 00:30:42.509 Received shutdown signal, test time was about 10.000000 seconds 00:30:42.509 00:30:42.509 Latency(us) 00:30:42.509 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:42.509 =================================================================================================================== 00:30:42.509 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:42.509 21:44:02 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:30:42.509 21:44:02 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:30:42.509 21:44:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 85800' 00:30:42.509 21:44:02 -- common/autotest_common.sh@945 -- # kill 85800 00:30:42.509 21:44:02 -- common/autotest_common.sh@950 -- # wait 85800 00:30:42.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:42.509 21:44:02 -- host/timeout.sh@110 -- # bdevperf_pid=86044 00:30:42.509 21:44:02 -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:30:42.509 21:44:02 -- host/timeout.sh@112 -- # waitforlisten 86044 /var/tmp/bdevperf.sock 00:30:42.509 21:44:03 -- common/autotest_common.sh@819 -- # '[' -z 86044 ']' 00:30:42.509 21:44:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:42.509 21:44:03 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:42.509 21:44:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:42.509 21:44:03 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:42.509 21:44:03 -- common/autotest_common.sh@10 -- # set +x 00:30:42.509 [2024-07-11 21:44:03.038020] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:30:42.509 [2024-07-11 21:44:03.038158] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86044 ] 00:30:42.509 [2024-07-11 21:44:03.172025] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:42.509 [2024-07-11 21:44:03.265299] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:43.073 21:44:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:43.074 21:44:03 -- common/autotest_common.sh@852 -- # return 0 00:30:43.074 21:44:03 -- host/timeout.sh@116 -- # dtrace_pid=86060 00:30:43.074 21:44:03 -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 86044 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:30:43.074 21:44:03 -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:30:43.331 21:44:04 -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:30:43.895 NVMe0n1 00:30:43.895 21:44:04 -- host/timeout.sh@124 -- # rpc_pid=86107 00:30:43.895 21:44:04 -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:43.895 21:44:04 -- host/timeout.sh@125 -- # sleep 1 00:30:43.895 Running I/O for 10 seconds... 00:30:44.827 21:44:05 -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:45.087 [2024-07-11 21:44:05.836570] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d92620 is same with the state(5) to be set 00:30:45.087 [2024-07-11 21:44:05.836639] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d92620 is same with the state(5) to be set 00:30:45.087 [2024-07-11 21:44:05.836651] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d92620 is same with the state(5) to be set 00:30:45.087 [2024-07-11 21:44:05.836660] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d92620 is same with the state(5) to be set 00:30:45.087 [2024-07-11 21:44:05.836669] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d92620 is same with the state(5) to be set 00:30:45.087 [2024-07-11 21:44:05.836677] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d92620 is same with the state(5) to be set 00:30:45.087 [2024-07-11 21:44:05.836685] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d92620 is same with the state(5) to be set 00:30:45.087 [2024-07-11 21:44:05.836694] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d92620 is same with the state(5) to be set 00:30:45.087 [2024-07-11 21:44:05.836703] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d92620 is same with the state(5) to be set 00:30:45.087 [2024-07-11 21:44:05.836712] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d92620 is same with the state(5) to be set 00:30:45.087 [2024-07-11 21:44:05.836720] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x1d92620 is same with the state(5) to be set 00:30:45.087 [2024-07-11 21:44:05.836730] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d92620 is same with the state(5) to be set 00:30:45.087 [2024-07-11 21:44:05.836738] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d92620 is same with the state(5) to be set 00:30:45.087 [2024-07-11 21:44:05.836746] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d92620 is same with the state(5) to be set 00:30:45.087 [2024-07-11 21:44:05.836754] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d92620 is same with the state(5) to be set 00:30:45.087 [2024-07-11 21:44:05.836762] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d92620 is same with the state(5) to be set 00:30:45.087 [2024-07-11 21:44:05.836770] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d92620 is same with the state(5) to be set 00:30:45.087 [2024-07-11 21:44:05.836778] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d92620 is same with the state(5) to be set 00:30:45.087 [2024-07-11 21:44:05.836786] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d92620 is same with the state(5) to be set 00:30:45.087 [2024-07-11 21:44:05.836794] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d92620 is same with the state(5) to be set 00:30:45.087 [2024-07-11 21:44:05.836802] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d92620 is same with the state(5) to be set 00:30:45.087 [2024-07-11 21:44:05.836810] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d92620 is same with the state(5) to be set 00:30:45.087 [2024-07-11 21:44:05.836818] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d92620 is same with the state(5) to be set 00:30:45.087 [2024-07-11 21:44:05.836826] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d92620 is same with the state(5) to be set 00:30:45.087 [2024-07-11 21:44:05.836834] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d92620 is same with the state(5) to be set 00:30:45.087 [2024-07-11 21:44:05.836842] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d92620 is same with the state(5) to be set 00:30:45.087 [2024-07-11 21:44:05.836850] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d92620 is same with the state(5) to be set 00:30:45.087 [2024-07-11 21:44:05.836858] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d92620 is same with the state(5) to be set 00:30:45.087 [2024-07-11 21:44:05.836866] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d92620 is same with the state(5) to be set 00:30:45.087 [2024-07-11 21:44:05.836874] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d92620 is same with the state(5) to be set 00:30:45.087 [2024-07-11 21:44:05.836882] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d92620 is same with the state(5) to be set 00:30:45.087 [2024-07-11 21:44:05.836889] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d92620 is same with the state(5) to be set 00:30:45.087 [2024-07-11 21:44:05.836897] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d92620 is same with the state(5) to be set 00:30:45.087 [2024-07-11 21:44:05.836906] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d92620 is same with the state(5) to be set 00:30:45.088 [2024-07-11 21:44:05.837665] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d92620 is same
with the state(5) to be set 00:30:45.088 [2024-07-11 21:44:05.837673] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d92620 is same with the state(5) to be set 00:30:45.088 [2024-07-11 21:44:05.837681] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d92620 is same with the state(5) to be set 00:30:45.088 [2024-07-11 21:44:05.837689] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d92620 is same with the state(5) to be set 00:30:45.088 [2024-07-11 21:44:05.837697] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d92620 is same with the state(5) to be set 00:30:45.088 [2024-07-11 21:44:05.837705] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d92620 is same with the state(5) to be set 00:30:45.088 [2024-07-11 21:44:05.837713] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d92620 is same with the state(5) to be set 00:30:45.088 [2024-07-11 21:44:05.837722] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d92620 is same with the state(5) to be set 00:30:45.088 [2024-07-11 21:44:05.837816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:51176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.088 [2024-07-11 21:44:05.837846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.088 [2024-07-11 21:44:05.837870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:36368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.088 [2024-07-11 21:44:05.837881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.088 [2024-07-11 21:44:05.837893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:34312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.088 [2024-07-11 21:44:05.837902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.088 [2024-07-11 21:44:05.837913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.088 [2024-07-11 21:44:05.837921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.088 [2024-07-11 21:44:05.837933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:69112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.088 [2024-07-11 21:44:05.837942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.088 [2024-07-11 21:44:05.837953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:91624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.088 [2024-07-11 21:44:05.837962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.088 [2024-07-11 21:44:05.837973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:77720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.088 [2024-07-11 21:44:05.837982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.088 [2024-07-11 21:44:05.837993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:49760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.088 [2024-07-11 21:44:05.838002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.088 [2024-07-11 21:44:05.838013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:36968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.088 [2024-07-11 21:44:05.838021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.088 [2024-07-11 21:44:05.838032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.088 [2024-07-11 21:44:05.838041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.088 [2024-07-11 21:44:05.838052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:86448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.088 [2024-07-11 21:44:05.838061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.088 [2024-07-11 21:44:05.838073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:114016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.088 [2024-07-11 21:44:05.838082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.088 [2024-07-11 21:44:05.838094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:118632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.088 [2024-07-11 21:44:05.838103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.088 [2024-07-11 21:44:05.838114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:83768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.088 [2024-07-11 21:44:05.838128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.088 [2024-07-11 21:44:05.838139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.088 [2024-07-11 21:44:05.838148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.088 [2024-07-11 21:44:05.838159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:107488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.088 [2024-07-11 21:44:05.838169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.088 [2024-07-11 21:44:05.838179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:21168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.088 [2024-07-11 21:44:05.838190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.088 [2024-07-11 21:44:05.838202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:104792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.088 [2024-07-11 21:44:05.838211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.088 [2024-07-11 21:44:05.838222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:63192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.089 [2024-07-11 21:44:05.838232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.089 [2024-07-11 21:44:05.838243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:58312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.089 [2024-07-11 21:44:05.838252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.089 [2024-07-11 21:44:05.838264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:22912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.089 [2024-07-11 21:44:05.838273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.089 [2024-07-11 21:44:05.838284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:65560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.089 [2024-07-11 21:44:05.838293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.089 [2024-07-11 21:44:05.838305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:35912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.089 [2024-07-11 21:44:05.838314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.089 [2024-07-11 21:44:05.838325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.089 [2024-07-11 21:44:05.838334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.089 [2024-07-11 21:44:05.838345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:10864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.089 [2024-07-11 21:44:05.838354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.089 [2024-07-11 21:44:05.838365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:68728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.089 [2024-07-11 21:44:05.838373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.089 [2024-07-11 21:44:05.838384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:130840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.089 [2024-07-11 21:44:05.838393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:45.089 [2024-07-11 21:44:05.838403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:18544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.089 [2024-07-11 21:44:05.838413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.089 [2024-07-11 21:44:05.838424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:107480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.089 [2024-07-11 21:44:05.838433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.089 [2024-07-11 21:44:05.838444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:76560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.089 [2024-07-11 21:44:05.838453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.089 [2024-07-11 21:44:05.838464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:22328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.089 [2024-07-11 21:44:05.838474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.089 [2024-07-11 21:44:05.838497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:88672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.089 [2024-07-11 21:44:05.838509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.089 [2024-07-11 21:44:05.838520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:52040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.089 [2024-07-11 21:44:05.838530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.089 [2024-07-11 21:44:05.838542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:62464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.089 [2024-07-11 21:44:05.838551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.089 [2024-07-11 21:44:05.838562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:68632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.089 [2024-07-11 21:44:05.838571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.089 [2024-07-11 21:44:05.838582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:24568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.089 [2024-07-11 21:44:05.838602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.089 [2024-07-11 21:44:05.838615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:122384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.089 [2024-07-11 21:44:05.838624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.089 [2024-07-11 21:44:05.838636] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:6880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.089 [2024-07-11 21:44:05.838645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.089 [2024-07-11 21:44:05.838656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:46248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.089 [2024-07-11 21:44:05.838665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.089 [2024-07-11 21:44:05.838676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:104592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.089 [2024-07-11 21:44:05.838685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.089 [2024-07-11 21:44:05.838696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:49368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.089 [2024-07-11 21:44:05.838705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.089 [2024-07-11 21:44:05.838716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:130728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.089 [2024-07-11 21:44:05.838738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.089 [2024-07-11 21:44:05.838755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:14192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.089 [2024-07-11 21:44:05.838764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.089 [2024-07-11 21:44:05.838775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:104184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.089 [2024-07-11 21:44:05.838784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.089 [2024-07-11 21:44:05.838804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:110544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.089 [2024-07-11 21:44:05.838812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.089 [2024-07-11 21:44:05.838823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:23048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.089 [2024-07-11 21:44:05.838832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.089 [2024-07-11 21:44:05.838843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:80464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.089 [2024-07-11 21:44:05.838852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.089 [2024-07-11 21:44:05.838863] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:61840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.089 [2024-07-11 21:44:05.838872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.089 [2024-07-11 21:44:05.838883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:28344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.089 [2024-07-11 21:44:05.838892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.089 [2024-07-11 21:44:05.838903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:45840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.089 [2024-07-11 21:44:05.838912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.089 [2024-07-11 21:44:05.838922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:45760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.089 [2024-07-11 21:44:05.838931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.089 [2024-07-11 21:44:05.838941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:45600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.089 [2024-07-11 21:44:05.838950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.089 [2024-07-11 21:44:05.838961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:96440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.089 [2024-07-11 21:44:05.838970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.089 [2024-07-11 21:44:05.838980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:35656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.089 [2024-07-11 21:44:05.838989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.089 [2024-07-11 21:44:05.839000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:87480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.089 [2024-07-11 21:44:05.839008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.089 [2024-07-11 21:44:05.839019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:42496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.089 [2024-07-11 21:44:05.839028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.089 [2024-07-11 21:44:05.839038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:45192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.089 [2024-07-11 21:44:05.839047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.089 [2024-07-11 21:44:05.839058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:57 nsid:1 lba:102344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.089 [2024-07-11 21:44:05.839074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.089 [2024-07-11 21:44:05.839090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:10160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.089 [2024-07-11 21:44:05.839106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.089 [2024-07-11 21:44:05.839118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:110088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.089 [2024-07-11 21:44:05.839127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.089 [2024-07-11 21:44:05.839138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:118872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.089 [2024-07-11 21:44:05.839147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.090 [2024-07-11 21:44:05.839158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:59680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.090 [2024-07-11 21:44:05.839167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.090 [2024-07-11 21:44:05.839178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:44176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.090 [2024-07-11 21:44:05.839187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.090 [2024-07-11 21:44:05.839198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:36480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.090 [2024-07-11 21:44:05.839207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.090 [2024-07-11 21:44:05.839218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:88552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.090 [2024-07-11 21:44:05.839227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.090 [2024-07-11 21:44:05.839237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:35704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.090 [2024-07-11 21:44:05.839246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.090 [2024-07-11 21:44:05.839257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:104416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.090 [2024-07-11 21:44:05.839266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.090 [2024-07-11 21:44:05.839277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:114512 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.090 [2024-07-11 21:44:05.839286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.090 [2024-07-11 21:44:05.839296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:83560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.090 [2024-07-11 21:44:05.839305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.090 [2024-07-11 21:44:05.839316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:17992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.090 [2024-07-11 21:44:05.839325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.090 [2024-07-11 21:44:05.839336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:31864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.090 [2024-07-11 21:44:05.839345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.090 [2024-07-11 21:44:05.839356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:105400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.090 [2024-07-11 21:44:05.839365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.090 [2024-07-11 21:44:05.839376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:87680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.090 [2024-07-11 21:44:05.839385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.090 [2024-07-11 21:44:05.839396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:70840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.090 [2024-07-11 21:44:05.839410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.090 [2024-07-11 21:44:05.839427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:113888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.090 [2024-07-11 21:44:05.839437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.090 [2024-07-11 21:44:05.839448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:62104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.090 [2024-07-11 21:44:05.839457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.090 [2024-07-11 21:44:05.839468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:8816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.090 [2024-07-11 21:44:05.839477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.090 [2024-07-11 21:44:05.839498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:60760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:30:45.090 [2024-07-11 21:44:05.839508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.090 [2024-07-11 21:44:05.839520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:47888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.090 [2024-07-11 21:44:05.839529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.090 [2024-07-11 21:44:05.839540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:5688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.090 [2024-07-11 21:44:05.839549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.090 [2024-07-11 21:44:05.839561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:76272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.090 [2024-07-11 21:44:05.839570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.090 [2024-07-11 21:44:05.839581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:70200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.090 [2024-07-11 21:44:05.839590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.090 [2024-07-11 21:44:05.839601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:26720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.090 [2024-07-11 21:44:05.839610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.090 [2024-07-11 21:44:05.839620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:109688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.090 [2024-07-11 21:44:05.839629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.090 [2024-07-11 21:44:05.839639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:66600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.090 [2024-07-11 21:44:05.839648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.090 [2024-07-11 21:44:05.839659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:82152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.090 [2024-07-11 21:44:05.839668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.090 [2024-07-11 21:44:05.839679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:13800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.090 [2024-07-11 21:44:05.839688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.090 [2024-07-11 21:44:05.839698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:24232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.090 [2024-07-11 
21:44:05.839708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.090 [2024-07-11 21:44:05.839719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:85968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.090 [2024-07-11 21:44:05.839728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.090 [2024-07-11 21:44:05.839739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:74376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.090 [2024-07-11 21:44:05.839753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.090 [2024-07-11 21:44:05.839769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:58376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.090 [2024-07-11 21:44:05.839778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.090 [2024-07-11 21:44:05.839789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:43416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.090 [2024-07-11 21:44:05.839798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.090 [2024-07-11 21:44:05.839809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:25376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.090 [2024-07-11 21:44:05.839818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.090 [2024-07-11 21:44:05.839829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:7080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.090 [2024-07-11 21:44:05.839838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.090 [2024-07-11 21:44:05.839849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:86704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.090 [2024-07-11 21:44:05.839858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.090 [2024-07-11 21:44:05.839869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:77224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.090 [2024-07-11 21:44:05.839877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.090 [2024-07-11 21:44:05.839888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:15296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.090 [2024-07-11 21:44:05.839897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.090 [2024-07-11 21:44:05.839907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:121936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.090 [2024-07-11 21:44:05.839916] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.090 [2024-07-11 21:44:05.839927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:61784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.090 [2024-07-11 21:44:05.839936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.090 [2024-07-11 21:44:05.839946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:32416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.090 [2024-07-11 21:44:05.839955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.090 [2024-07-11 21:44:05.839966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:52976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.090 [2024-07-11 21:44:05.839975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.090 [2024-07-11 21:44:05.839986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:118824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.090 [2024-07-11 21:44:05.839995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.090 [2024-07-11 21:44:05.840006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:36608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.091 [2024-07-11 21:44:05.840015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.091 [2024-07-11 21:44:05.840025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:70616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.091 [2024-07-11 21:44:05.840034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.091 [2024-07-11 21:44:05.840045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:31080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.091 [2024-07-11 21:44:05.840054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.091 [2024-07-11 21:44:05.840064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:110136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.091 [2024-07-11 21:44:05.840078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.091 [2024-07-11 21:44:05.840091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:100872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.091 [2024-07-11 21:44:05.840100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.091 [2024-07-11 21:44:05.840111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:96288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.091 [2024-07-11 21:44:05.840120] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.091 [2024-07-11 21:44:05.840131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:44880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.091 [2024-07-11 21:44:05.840140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.091 [2024-07-11 21:44:05.840151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:59672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.091 [2024-07-11 21:44:05.840160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.091 [2024-07-11 21:44:05.840170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:32888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.091 [2024-07-11 21:44:05.840179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.091 [2024-07-11 21:44:05.840190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:14808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.091 [2024-07-11 21:44:05.840199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.091 [2024-07-11 21:44:05.840210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:19552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.091 [2024-07-11 21:44:05.840219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.091 [2024-07-11 21:44:05.840230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:57048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.091 [2024-07-11 21:44:05.840239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.091 [2024-07-11 21:44:05.840250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:57280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.091 [2024-07-11 21:44:05.840259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.091 [2024-07-11 21:44:05.840270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:6856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.091 [2024-07-11 21:44:05.840279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.091 [2024-07-11 21:44:05.840289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:50128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.091 [2024-07-11 21:44:05.840298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.091 [2024-07-11 21:44:05.840309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:108808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.091 [2024-07-11 21:44:05.840317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.091 [2024-07-11 21:44:05.840328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:95424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.091 [2024-07-11 21:44:05.840337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.091 [2024-07-11 21:44:05.840347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:98368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.091 [2024-07-11 21:44:05.840356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.091 [2024-07-11 21:44:05.840367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:20560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.091 [2024-07-11 21:44:05.840377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.091 [2024-07-11 21:44:05.840387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:11464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.091 [2024-07-11 21:44:05.840402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.091 [2024-07-11 21:44:05.840414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:113672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.091 [2024-07-11 21:44:05.840424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.091 [2024-07-11 21:44:05.840435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:66344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.091 [2024-07-11 21:44:05.840444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.091 [2024-07-11 21:44:05.840455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:82960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.091 [2024-07-11 21:44:05.840464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.091 [2024-07-11 21:44:05.840476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:51880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.091 [2024-07-11 21:44:05.840494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.091 [2024-07-11 21:44:05.840506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:12664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.091 [2024-07-11 21:44:05.840516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.091 [2024-07-11 21:44:05.840526] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0a320 is same with the state(5) to be set 00:30:45.091 [2024-07-11 21:44:05.840549] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:45.091 [2024-07-11 21:44:05.840557] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:45.091 [2024-07-11 21:44:05.840565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:73664 len:8 PRP1 0x0 PRP2 0x0 00:30:45.091 [2024-07-11 21:44:05.840575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.091 [2024-07-11 21:44:05.840629] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1e0a320 was disconnected and freed. reset controller. 00:30:45.091 [2024-07-11 21:44:05.840912] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:45.091 [2024-07-11 21:44:05.841007] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0f3a0 (9): Bad file descriptor 00:30:45.091 [2024-07-11 21:44:05.841147] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.091 [2024-07-11 21:44:05.841213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.091 [2024-07-11 21:44:05.841256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.091 [2024-07-11 21:44:05.841271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0f3a0 with addr=10.0.0.2, port=4420 00:30:45.091 [2024-07-11 21:44:05.841282] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0f3a0 is same with the state(5) to be set 00:30:45.091 [2024-07-11 21:44:05.841301] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0f3a0 (9): Bad file descriptor 00:30:45.092 [2024-07-11 21:44:05.841317] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:45.092 [2024-07-11 21:44:05.841327] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:45.092 [2024-07-11 21:44:05.841337] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:45.092 [2024-07-11 21:44:05.841358] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
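Every READ still queued on qid:1 above is completed with ABORTED - SQ DELETION once the qpair is torn down for the controller reset; the "(00/08)" that spdk_nvme_print_completion prints is the status code type / status code pair, and generic status 0x08 is the NVMe "Command Aborted due to SQ Deletion" code. As a rough, stand-alone illustration (not SPDK code), the 16-bit status-plus-phase word of a completion entry can be unpacked with shell arithmetic, assuming the usual layout of phase in bit 0, SC in bits 1-8 and SCT in bits 9-11:

  # Hypothetical status word encoding SCT=0x0, SC=0x08, P=1, for illustration only.
  status=0x0011
  sct=$(( (status >> 9) & 0x7 ))   # status code type: 0x0 = generic command status
  sc=$(( (status >> 1) & 0xff ))   # status code: 0x08 = command aborted due to SQ deletion
  p=$(( status & 0x1 ))            # phase tag
  printf 'sct=%#x sc=%#x p=%d\n' "$sct" "$sc" "$p"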
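The connect() failures above and in the retries that follow (errno = 111 from both the uring and posix sock layers) are ECONNREFUSED on Linux: nothing is accepting connections on 10.0.0.2:4420 at this point in the test, so every reconnect attempt is refused and bdev_nvme keeps rescheduling the reset. A quick way to reproduce the same refusal from bash, using a loopback address and port chosen only as an example:

  # Connect to a port nothing is listening on; the kernel refuses the
  # connection, which is what errno 111 (ECONNREFUSED) means in the log above.
  if ! (exec 3<>/dev/tcp/127.0.0.1/4420) 2>/dev/null; then
      echo "connect() refused (ECONNREFUSED, errno 111)"
  fi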
00:30:45.092 [2024-07-11 21:44:05.841369] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:45.092 21:44:05 -- host/timeout.sh@128 -- # wait 86107 00:30:47.008 [2024-07-11 21:44:07.841624] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.008 [2024-07-11 21:44:07.841738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.008 [2024-07-11 21:44:07.841785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.008 [2024-07-11 21:44:07.841802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0f3a0 with addr=10.0.0.2, port=4420 00:30:47.008 [2024-07-11 21:44:07.841818] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0f3a0 is same with the state(5) to be set 00:30:47.008 [2024-07-11 21:44:07.841861] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0f3a0 (9): Bad file descriptor 00:30:47.008 [2024-07-11 21:44:07.841882] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.008 [2024-07-11 21:44:07.841893] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.008 [2024-07-11 21:44:07.841903] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.008 [2024-07-11 21:44:07.841934] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:47.008 [2024-07-11 21:44:07.841946] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.906 [2024-07-11 21:44:09.842150] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.906 [2024-07-11 21:44:09.842259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.906 [2024-07-11 21:44:09.842304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.906 [2024-07-11 21:44:09.842321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0f3a0 with addr=10.0.0.2, port=4420 00:30:48.906 [2024-07-11 21:44:09.842335] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0f3a0 is same with the state(5) to be set 00:30:48.906 [2024-07-11 21:44:09.842364] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0f3a0 (9): Bad file descriptor 00:30:48.906 [2024-07-11 21:44:09.842396] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.906 [2024-07-11 21:44:09.842408] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.906 [2024-07-11 21:44:09.842419] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.906 [2024-07-11 21:44:09.842448] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.906 [2024-07-11 21:44:09.842460] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:51.448 [2024-07-11 21:44:11.842544] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
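The reconnect attempts above land roughly two seconds apart (21:44:05, 21:44:07, 21:44:09), and each delayed retry is recorded by the attached trace probes as a 'reconnect delay bdev controller NVMe0' event in trace.txt; host/timeout.sh then counts those events with grep -c and compares the count against a threshold, which is the (( 3 <= 2 )) check visible in the output below. A rough re-creation of that verification step follows; the grep pattern and trace path are taken from the log, while wrapping the comparison in an if/exit is an assumption about the script's flow:

  # Count the recorded reconnect-delay events and treat too few as a failure.
  trace=/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
  delays=$(grep -c 'reconnect delay bdev controller NVMe0' "$trace")
  if (( delays <= 2 )); then
      echo "expected more than 2 reconnect delay events, saw $delays" >&2
      exit 1
  fi
  echo "saw $delays reconnect delay events"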
00:30:51.448 [2024-07-11 21:44:11.842630] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:51.448 [2024-07-11 21:44:11.842643] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:51.448 [2024-07-11 21:44:11.842655] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:30:51.448 [2024-07-11 21:44:11.842684] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:52.013 00:30:52.013 Latency(us) 00:30:52.013 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:52.013 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:30:52.013 NVMe0n1 : 8.13 2062.05 8.05 15.74 0.00 61502.96 8281.37 7046430.72 00:30:52.013 =================================================================================================================== 00:30:52.013 Total : 2062.05 8.05 15.74 0.00 61502.96 8281.37 7046430.72 00:30:52.013 0 00:30:52.013 21:44:12 -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:30:52.013 Attaching 5 probes... 00:30:52.013 1330.675817: reset bdev controller NVMe0 00:30:52.013 1330.847526: reconnect bdev controller NVMe0 00:30:52.013 3331.180139: reconnect delay bdev controller NVMe0 00:30:52.013 3331.234468: reconnect bdev controller NVMe0 00:30:52.013 5331.763643: reconnect delay bdev controller NVMe0 00:30:52.013 5331.793018: reconnect bdev controller NVMe0 00:30:52.013 7332.287148: reconnect delay bdev controller NVMe0 00:30:52.013 7332.312438: reconnect bdev controller NVMe0 00:30:52.013 21:44:12 -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:30:52.013 21:44:12 -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:30:52.013 21:44:12 -- host/timeout.sh@136 -- # kill 86060 00:30:52.013 21:44:12 -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:30:52.013 21:44:12 -- host/timeout.sh@139 -- # killprocess 86044 00:30:52.013 21:44:12 -- common/autotest_common.sh@926 -- # '[' -z 86044 ']' 00:30:52.013 21:44:12 -- common/autotest_common.sh@930 -- # kill -0 86044 00:30:52.013 21:44:12 -- common/autotest_common.sh@931 -- # uname 00:30:52.013 21:44:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:52.013 21:44:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 86044 00:30:52.013 killing process with pid 86044 00:30:52.013 Received shutdown signal, test time was about 8.180733 seconds 00:30:52.013 00:30:52.013 Latency(us) 00:30:52.013 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:52.013 =================================================================================================================== 00:30:52.013 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:52.013 21:44:12 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:30:52.013 21:44:12 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:30:52.013 21:44:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 86044' 00:30:52.013 21:44:12 -- common/autotest_common.sh@945 -- # kill 86044 00:30:52.013 21:44:12 -- common/autotest_common.sh@950 -- # wait 86044 00:30:52.277 21:44:13 -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:52.534 21:44:13 -- host/timeout.sh@143 -- # trap - SIGINT 
SIGTERM EXIT 00:30:52.534 21:44:13 -- host/timeout.sh@145 -- # nvmftestfini 00:30:52.534 21:44:13 -- nvmf/common.sh@476 -- # nvmfcleanup 00:30:52.534 21:44:13 -- nvmf/common.sh@116 -- # sync 00:30:52.534 21:44:13 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:30:52.534 21:44:13 -- nvmf/common.sh@119 -- # set +e 00:30:52.534 21:44:13 -- nvmf/common.sh@120 -- # for i in {1..20} 00:30:52.534 21:44:13 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:30:52.534 rmmod nvme_tcp 00:30:52.534 rmmod nvme_fabrics 00:30:52.534 rmmod nvme_keyring 00:30:52.534 21:44:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:30:52.534 21:44:13 -- nvmf/common.sh@123 -- # set -e 00:30:52.534 21:44:13 -- nvmf/common.sh@124 -- # return 0 00:30:52.534 21:44:13 -- nvmf/common.sh@477 -- # '[' -n 85607 ']' 00:30:52.534 21:44:13 -- nvmf/common.sh@478 -- # killprocess 85607 00:30:52.534 21:44:13 -- common/autotest_common.sh@926 -- # '[' -z 85607 ']' 00:30:52.534 21:44:13 -- common/autotest_common.sh@930 -- # kill -0 85607 00:30:52.534 21:44:13 -- common/autotest_common.sh@931 -- # uname 00:30:52.534 21:44:13 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:52.534 21:44:13 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 85607 00:30:52.534 21:44:13 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:30:52.534 21:44:13 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:30:52.534 killing process with pid 85607 00:30:52.534 21:44:13 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 85607' 00:30:52.534 21:44:13 -- common/autotest_common.sh@945 -- # kill 85607 00:30:52.534 21:44:13 -- common/autotest_common.sh@950 -- # wait 85607 00:30:52.791 21:44:13 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:30:52.791 21:44:13 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:30:52.791 21:44:13 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:30:52.791 21:44:13 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:52.791 21:44:13 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:30:52.791 21:44:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:52.791 21:44:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:52.791 21:44:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:53.049 21:44:13 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:30:53.049 00:30:53.049 real 0m47.181s 00:30:53.049 user 2m18.636s 00:30:53.049 sys 0m5.693s 00:30:53.049 21:44:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:53.049 ************************************ 00:30:53.049 END TEST nvmf_timeout 00:30:53.049 21:44:13 -- common/autotest_common.sh@10 -- # set +x 00:30:53.049 ************************************ 00:30:53.049 21:44:13 -- nvmf/nvmf.sh@120 -- # [[ virt == phy ]] 00:30:53.049 21:44:13 -- nvmf/nvmf.sh@127 -- # timing_exit host 00:30:53.049 21:44:13 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:53.049 21:44:13 -- common/autotest_common.sh@10 -- # set +x 00:30:53.049 21:44:13 -- nvmf/nvmf.sh@129 -- # trap - SIGINT SIGTERM EXIT 00:30:53.049 00:30:53.049 real 10m36.820s 00:30:53.049 user 29m51.172s 00:30:53.049 sys 3m17.557s 00:30:53.049 21:44:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:53.049 21:44:13 -- common/autotest_common.sh@10 -- # set +x 00:30:53.049 ************************************ 00:30:53.049 END TEST nvmf_tcp 00:30:53.049 ************************************ 00:30:53.049 21:44:13 -- spdk/autotest.sh@296 -- # [[ 1 
-eq 0 ]] 00:30:53.049 21:44:13 -- spdk/autotest.sh@300 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:30:53.049 21:44:13 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:30:53.049 21:44:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:53.049 21:44:13 -- common/autotest_common.sh@10 -- # set +x 00:30:53.049 ************************************ 00:30:53.049 START TEST nvmf_dif 00:30:53.049 ************************************ 00:30:53.049 21:44:13 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:30:53.049 * Looking for test storage... 00:30:53.049 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:30:53.049 21:44:13 -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:30:53.049 21:44:13 -- nvmf/common.sh@7 -- # uname -s 00:30:53.049 21:44:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:53.049 21:44:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:53.049 21:44:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:53.049 21:44:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:53.049 21:44:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:53.049 21:44:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:53.049 21:44:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:53.049 21:44:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:53.049 21:44:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:53.049 21:44:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:53.049 21:44:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:65f0dc09-2f81-4c7b-a413-2a2a000e2750 00:30:53.049 21:44:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=65f0dc09-2f81-4c7b-a413-2a2a000e2750 00:30:53.049 21:44:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:53.049 21:44:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:53.049 21:44:13 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:30:53.049 21:44:13 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:30:53.049 21:44:13 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:53.049 21:44:13 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:53.049 21:44:13 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:53.049 21:44:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:53.049 21:44:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:53.049 21:44:13 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:53.049 21:44:13 -- paths/export.sh@5 -- # export PATH 00:30:53.049 21:44:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:53.049 21:44:13 -- nvmf/common.sh@46 -- # : 0 00:30:53.049 21:44:13 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:30:53.049 21:44:13 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:30:53.050 21:44:13 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:30:53.050 21:44:13 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:53.050 21:44:13 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:53.050 21:44:13 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:30:53.050 21:44:13 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:30:53.050 21:44:13 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:30:53.050 21:44:13 -- target/dif.sh@15 -- # NULL_META=16 00:30:53.050 21:44:13 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:30:53.050 21:44:13 -- target/dif.sh@15 -- # NULL_SIZE=64 00:30:53.050 21:44:13 -- target/dif.sh@15 -- # NULL_DIF=1 00:30:53.050 21:44:13 -- target/dif.sh@135 -- # nvmftestinit 00:30:53.050 21:44:13 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:30:53.050 21:44:13 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:53.050 21:44:13 -- nvmf/common.sh@436 -- # prepare_net_devs 00:30:53.050 21:44:13 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:30:53.050 21:44:13 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:30:53.050 21:44:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:53.050 21:44:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:53.050 21:44:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:53.050 21:44:13 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:30:53.050 21:44:13 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:30:53.050 21:44:13 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:30:53.050 21:44:13 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:30:53.050 21:44:13 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:30:53.050 21:44:13 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:30:53.050 21:44:13 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:53.050 21:44:13 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:53.050 21:44:13 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:30:53.050 21:44:13 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:30:53.050 21:44:13 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:30:53.050 21:44:13 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:30:53.050 21:44:13 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:30:53.050 21:44:13 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:53.050 
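The nvmf_veth_init trace that follows builds the virtual network every TCP test in this run uses: one bridge (nvmf_br) joining an initiator veth at 10.0.0.1 to two target veths at 10.0.0.2 and 10.0.0.3 inside the nvmf_tgt_ns_spdk namespace. Condensed into a standalone sketch with the same interface names (the trace's initial teardown of a not-yet-existing topology, which produces the "Cannot find device" noise, is omitted):

# Namespace and veth pairs: the *_if ends carry traffic, the *_br ends get
# enslaved to the bridge; the two target interfaces live inside the namespace.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
# Addressing: initiator 10.0.0.1, first target 10.0.0.2, second target 10.0.0.3.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
# Bridge tying the three bridge-side veth ends together, plus firewall rules
# admitting NVMe/TCP (port 4420) and intra-bridge forwarding.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2    # initiator -> first target address, as in the trace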
21:44:13 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:30:53.050 21:44:13 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:30:53.050 21:44:13 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:30:53.050 21:44:13 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:30:53.050 21:44:13 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:30:53.050 21:44:13 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:30:53.308 Cannot find device "nvmf_tgt_br" 00:30:53.308 21:44:14 -- nvmf/common.sh@154 -- # true 00:30:53.308 21:44:14 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:30:53.308 Cannot find device "nvmf_tgt_br2" 00:30:53.308 21:44:14 -- nvmf/common.sh@155 -- # true 00:30:53.308 21:44:14 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:30:53.308 21:44:14 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:30:53.308 Cannot find device "nvmf_tgt_br" 00:30:53.308 21:44:14 -- nvmf/common.sh@157 -- # true 00:30:53.308 21:44:14 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:30:53.308 Cannot find device "nvmf_tgt_br2" 00:30:53.308 21:44:14 -- nvmf/common.sh@158 -- # true 00:30:53.308 21:44:14 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:30:53.308 21:44:14 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:30:53.308 21:44:14 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:30:53.308 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:53.308 21:44:14 -- nvmf/common.sh@161 -- # true 00:30:53.308 21:44:14 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:30:53.308 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:53.308 21:44:14 -- nvmf/common.sh@162 -- # true 00:30:53.308 21:44:14 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:30:53.308 21:44:14 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:30:53.308 21:44:14 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:30:53.308 21:44:14 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:30:53.308 21:44:14 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:30:53.308 21:44:14 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:30:53.308 21:44:14 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:30:53.308 21:44:14 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:30:53.308 21:44:14 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:30:53.308 21:44:14 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:30:53.308 21:44:14 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:30:53.308 21:44:14 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:30:53.308 21:44:14 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:30:53.308 21:44:14 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:30:53.308 21:44:14 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:30:53.308 21:44:14 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:30:53.308 21:44:14 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:30:53.308 21:44:14 -- nvmf/common.sh@192 -- # ip link set 
nvmf_br up 00:30:53.308 21:44:14 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:30:53.308 21:44:14 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:30:53.308 21:44:14 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:30:53.308 21:44:14 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:30:53.565 21:44:14 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:30:53.565 21:44:14 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:30:53.565 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:53.565 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.094 ms 00:30:53.565 00:30:53.565 --- 10.0.0.2 ping statistics --- 00:30:53.565 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:53.565 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:30:53.565 21:44:14 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:30:53.565 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:30:53.565 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:30:53.565 00:30:53.565 --- 10.0.0.3 ping statistics --- 00:30:53.565 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:53.565 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:30:53.565 21:44:14 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:30:53.565 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:53.565 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:30:53.565 00:30:53.565 --- 10.0.0.1 ping statistics --- 00:30:53.565 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:53.565 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:30:53.565 21:44:14 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:53.565 21:44:14 -- nvmf/common.sh@421 -- # return 0 00:30:53.565 21:44:14 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:30:53.565 21:44:14 -- nvmf/common.sh@439 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:30:53.822 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:53.822 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:30:53.822 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:30:53.822 21:44:14 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:53.822 21:44:14 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:30:53.822 21:44:14 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:30:53.822 21:44:14 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:53.822 21:44:14 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:30:53.822 21:44:14 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:30:53.822 21:44:14 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:30:53.822 21:44:14 -- target/dif.sh@137 -- # nvmfappstart 00:30:53.822 21:44:14 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:30:53.822 21:44:14 -- common/autotest_common.sh@712 -- # xtrace_disable 00:30:53.822 21:44:14 -- common/autotest_common.sh@10 -- # set +x 00:30:53.822 21:44:14 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:30:53.822 21:44:14 -- nvmf/common.sh@469 -- # nvmfpid=86543 00:30:53.822 21:44:14 -- nvmf/common.sh@470 -- # waitforlisten 86543 00:30:53.822 21:44:14 -- common/autotest_common.sh@819 -- # '[' -z 86543 ']' 00:30:53.822 21:44:14 -- common/autotest_common.sh@823 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:30:53.822 21:44:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:53.822 21:44:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:53.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:53.822 21:44:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:53.822 21:44:14 -- common/autotest_common.sh@10 -- # set +x 00:30:53.822 [2024-07-11 21:44:14.745677] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:30:53.822 [2024-07-11 21:44:14.745781] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:54.079 [2024-07-11 21:44:14.884058] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:54.079 [2024-07-11 21:44:14.979448] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:30:54.079 [2024-07-11 21:44:14.979607] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:54.079 [2024-07-11 21:44:14.979622] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:54.079 [2024-07-11 21:44:14.979631] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:54.079 [2024-07-11 21:44:14.979657] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:55.010 21:44:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:55.010 21:44:15 -- common/autotest_common.sh@852 -- # return 0 00:30:55.010 21:44:15 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:30:55.010 21:44:15 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:55.010 21:44:15 -- common/autotest_common.sh@10 -- # set +x 00:30:55.010 21:44:15 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:55.010 21:44:15 -- target/dif.sh@139 -- # create_transport 00:30:55.010 21:44:15 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:30:55.010 21:44:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:55.010 21:44:15 -- common/autotest_common.sh@10 -- # set +x 00:30:55.010 [2024-07-11 21:44:15.793812] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:55.010 21:44:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:55.010 21:44:15 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:30:55.010 21:44:15 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:30:55.010 21:44:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:55.010 21:44:15 -- common/autotest_common.sh@10 -- # set +x 00:30:55.010 ************************************ 00:30:55.010 START TEST fio_dif_1_default 00:30:55.010 ************************************ 00:30:55.010 21:44:15 -- common/autotest_common.sh@1104 -- # fio_dif_1 00:30:55.010 21:44:15 -- target/dif.sh@86 -- # create_subsystems 0 00:30:55.010 21:44:15 -- target/dif.sh@28 -- # local sub 00:30:55.011 21:44:15 -- target/dif.sh@30 -- # for sub in "$@" 00:30:55.011 21:44:15 -- target/dif.sh@31 -- # create_subsystem 0 00:30:55.011 21:44:15 -- target/dif.sh@18 -- # local sub_id=0 00:30:55.011 21:44:15 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 
64 512 --md-size 16 --dif-type 1 00:30:55.011 21:44:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:55.011 21:44:15 -- common/autotest_common.sh@10 -- # set +x 00:30:55.011 bdev_null0 00:30:55.011 21:44:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:55.011 21:44:15 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:55.011 21:44:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:55.011 21:44:15 -- common/autotest_common.sh@10 -- # set +x 00:30:55.011 21:44:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:55.011 21:44:15 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:55.011 21:44:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:55.011 21:44:15 -- common/autotest_common.sh@10 -- # set +x 00:30:55.011 21:44:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:55.011 21:44:15 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:55.011 21:44:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:55.011 21:44:15 -- common/autotest_common.sh@10 -- # set +x 00:30:55.011 [2024-07-11 21:44:15.837968] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:55.011 21:44:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:55.011 21:44:15 -- target/dif.sh@87 -- # fio /dev/fd/62 00:30:55.011 21:44:15 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:30:55.011 21:44:15 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:30:55.011 21:44:15 -- nvmf/common.sh@520 -- # config=() 00:30:55.011 21:44:15 -- nvmf/common.sh@520 -- # local subsystem config 00:30:55.011 21:44:15 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:55.011 21:44:15 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:30:55.011 21:44:15 -- target/dif.sh@82 -- # gen_fio_conf 00:30:55.011 21:44:15 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:30:55.011 { 00:30:55.011 "params": { 00:30:55.011 "name": "Nvme$subsystem", 00:30:55.011 "trtype": "$TEST_TRANSPORT", 00:30:55.011 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:55.011 "adrfam": "ipv4", 00:30:55.011 "trsvcid": "$NVMF_PORT", 00:30:55.011 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:55.011 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:55.011 "hdgst": ${hdgst:-false}, 00:30:55.011 "ddgst": ${ddgst:-false} 00:30:55.011 }, 00:30:55.011 "method": "bdev_nvme_attach_controller" 00:30:55.011 } 00:30:55.011 EOF 00:30:55.011 )") 00:30:55.011 21:44:15 -- target/dif.sh@54 -- # local file 00:30:55.011 21:44:15 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:55.011 21:44:15 -- target/dif.sh@56 -- # cat 00:30:55.011 21:44:15 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:30:55.011 21:44:15 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:55.011 21:44:15 -- common/autotest_common.sh@1318 -- # local sanitizers 00:30:55.011 21:44:15 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:55.011 21:44:15 -- common/autotest_common.sh@1320 -- # shift 00:30:55.011 21:44:15 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:30:55.011 21:44:15 -- common/autotest_common.sh@1323 -- # for 
sanitizer in "${sanitizers[@]}" 00:30:55.011 21:44:15 -- nvmf/common.sh@542 -- # cat 00:30:55.011 21:44:15 -- target/dif.sh@72 -- # (( file = 1 )) 00:30:55.011 21:44:15 -- target/dif.sh@72 -- # (( file <= files )) 00:30:55.011 21:44:15 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:55.011 21:44:15 -- common/autotest_common.sh@1324 -- # grep libasan 00:30:55.011 21:44:15 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:30:55.011 21:44:15 -- nvmf/common.sh@544 -- # jq . 00:30:55.011 21:44:15 -- nvmf/common.sh@545 -- # IFS=, 00:30:55.011 21:44:15 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:30:55.011 "params": { 00:30:55.011 "name": "Nvme0", 00:30:55.011 "trtype": "tcp", 00:30:55.011 "traddr": "10.0.0.2", 00:30:55.011 "adrfam": "ipv4", 00:30:55.011 "trsvcid": "4420", 00:30:55.011 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:55.011 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:55.011 "hdgst": false, 00:30:55.011 "ddgst": false 00:30:55.011 }, 00:30:55.011 "method": "bdev_nvme_attach_controller" 00:30:55.011 }' 00:30:55.011 21:44:15 -- common/autotest_common.sh@1324 -- # asan_lib= 00:30:55.011 21:44:15 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:30:55.011 21:44:15 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:30:55.011 21:44:15 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:55.011 21:44:15 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:30:55.011 21:44:15 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:30:55.011 21:44:15 -- common/autotest_common.sh@1324 -- # asan_lib= 00:30:55.011 21:44:15 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:30:55.011 21:44:15 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:30:55.011 21:44:15 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:55.268 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:30:55.268 fio-3.35 00:30:55.268 Starting 1 thread 00:30:55.525 [2024-07-11 21:44:16.442117] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
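The fio run reported below is driven through SPDK's fio bdev plugin: the JSON printed above (a bdev_nvme_attach_controller entry named Nvme0 pointed at 10.0.0.2:4420) is fed in on /dev/fd/62 and a generated job file on /dev/fd/61. A standalone equivalent, as a sketch only: the plugin path, fio binary and --spdk_json_conf usage come from the trace, while the job file is reconstructed from the banner fio prints (randread, 4 KiB blocks, iodepth 4, ~10 s run) and the bdev name Nvme0n1 and config file name are assumptions.

cat > dif_default.fio <<'FIO'
[filename0]
ioengine=spdk_bdev
thread=1
rw=randread
bs=4096
iodepth=4
time_based=1
runtime=10
filename=Nvme0n1
FIO

# nvme0.json (hypothetical file name) would hold the bdev_nvme_attach_controller
# entry printed above, wrapped in the usual
# {"subsystems":[{"subsystem":"bdev","config":[...]}]} envelope.
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=./nvme0.json dif_default.fio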
00:30:55.525 [2024-07-11 21:44:16.442659] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:31:07.765 00:31:07.765 filename0: (groupid=0, jobs=1): err= 0: pid=86608: Thu Jul 11 21:44:26 2024 00:31:07.765 read: IOPS=8676, BW=33.9MiB/s (35.5MB/s)(339MiB/10001msec) 00:31:07.765 slat (usec): min=6, max=219, avg= 8.60, stdev= 2.99 00:31:07.765 clat (usec): min=387, max=3347, avg=435.85, stdev=36.45 00:31:07.765 lat (usec): min=395, max=3384, avg=444.45, stdev=37.08 00:31:07.765 clat percentiles (usec): 00:31:07.765 | 1.00th=[ 400], 5.00th=[ 408], 10.00th=[ 412], 20.00th=[ 420], 00:31:07.765 | 30.00th=[ 424], 40.00th=[ 429], 50.00th=[ 433], 60.00th=[ 437], 00:31:07.765 | 70.00th=[ 445], 80.00th=[ 449], 90.00th=[ 461], 95.00th=[ 469], 00:31:07.765 | 99.00th=[ 515], 99.50th=[ 537], 99.90th=[ 578], 99.95th=[ 709], 00:31:07.765 | 99.99th=[ 1762] 00:31:07.765 bw ( KiB/s): min=33472, max=35168, per=100.00%, avg=34726.74, stdev=497.07, samples=19 00:31:07.765 iops : min= 8368, max= 8792, avg=8681.68, stdev=124.27, samples=19 00:31:07.765 lat (usec) : 500=98.64%, 750=1.31%, 1000=0.01% 00:31:07.765 lat (msec) : 2=0.02%, 4=0.01% 00:31:07.765 cpu : usr=84.65%, sys=13.34%, ctx=319, majf=0, minf=9 00:31:07.765 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:07.765 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:07.765 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:07.765 issued rwts: total=86772,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:07.765 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:07.765 00:31:07.765 Run status group 0 (all jobs): 00:31:07.765 READ: bw=33.9MiB/s (35.5MB/s), 33.9MiB/s-33.9MiB/s (35.5MB/s-35.5MB/s), io=339MiB (355MB), run=10001-10001msec 00:31:07.765 21:44:26 -- target/dif.sh@88 -- # destroy_subsystems 0 00:31:07.765 21:44:26 -- target/dif.sh@43 -- # local sub 00:31:07.765 21:44:26 -- target/dif.sh@45 -- # for sub in "$@" 00:31:07.766 21:44:26 -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:07.766 21:44:26 -- target/dif.sh@36 -- # local sub_id=0 00:31:07.766 21:44:26 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:07.766 21:44:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:07.766 21:44:26 -- common/autotest_common.sh@10 -- # set +x 00:31:07.766 21:44:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:07.766 21:44:26 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:07.766 21:44:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:07.766 21:44:26 -- common/autotest_common.sh@10 -- # set +x 00:31:07.766 21:44:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:07.766 00:31:07.766 real 0m10.974s 00:31:07.766 user 0m9.067s 00:31:07.766 sys 0m1.613s 00:31:07.766 21:44:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:07.766 21:44:26 -- common/autotest_common.sh@10 -- # set +x 00:31:07.766 ************************************ 00:31:07.766 END TEST fio_dif_1_default 00:31:07.766 ************************************ 00:31:07.766 21:44:26 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:31:07.766 21:44:26 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:31:07.766 21:44:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:07.766 21:44:26 -- common/autotest_common.sh@10 -- # set +x 00:31:07.766 ************************************ 00:31:07.766 START TEST 
fio_dif_1_multi_subsystems 00:31:07.766 ************************************ 00:31:07.766 21:44:26 -- common/autotest_common.sh@1104 -- # fio_dif_1_multi_subsystems 00:31:07.766 21:44:26 -- target/dif.sh@92 -- # local files=1 00:31:07.766 21:44:26 -- target/dif.sh@94 -- # create_subsystems 0 1 00:31:07.766 21:44:26 -- target/dif.sh@28 -- # local sub 00:31:07.766 21:44:26 -- target/dif.sh@30 -- # for sub in "$@" 00:31:07.766 21:44:26 -- target/dif.sh@31 -- # create_subsystem 0 00:31:07.766 21:44:26 -- target/dif.sh@18 -- # local sub_id=0 00:31:07.766 21:44:26 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:31:07.766 21:44:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:07.766 21:44:26 -- common/autotest_common.sh@10 -- # set +x 00:31:07.766 bdev_null0 00:31:07.766 21:44:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:07.766 21:44:26 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:07.766 21:44:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:07.766 21:44:26 -- common/autotest_common.sh@10 -- # set +x 00:31:07.766 21:44:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:07.766 21:44:26 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:07.766 21:44:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:07.766 21:44:26 -- common/autotest_common.sh@10 -- # set +x 00:31:07.766 21:44:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:07.766 21:44:26 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:07.766 21:44:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:07.766 21:44:26 -- common/autotest_common.sh@10 -- # set +x 00:31:07.766 [2024-07-11 21:44:26.869544] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:07.766 21:44:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:07.766 21:44:26 -- target/dif.sh@30 -- # for sub in "$@" 00:31:07.766 21:44:26 -- target/dif.sh@31 -- # create_subsystem 1 00:31:07.766 21:44:26 -- target/dif.sh@18 -- # local sub_id=1 00:31:07.766 21:44:26 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:31:07.766 21:44:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:07.766 21:44:26 -- common/autotest_common.sh@10 -- # set +x 00:31:07.766 bdev_null1 00:31:07.766 21:44:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:07.766 21:44:26 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:07.766 21:44:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:07.766 21:44:26 -- common/autotest_common.sh@10 -- # set +x 00:31:07.766 21:44:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:07.766 21:44:26 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:07.766 21:44:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:07.766 21:44:26 -- common/autotest_common.sh@10 -- # set +x 00:31:07.766 21:44:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:07.766 21:44:26 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:07.766 21:44:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:07.766 21:44:26 -- 
common/autotest_common.sh@10 -- # set +x 00:31:07.766 21:44:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:07.766 21:44:26 -- target/dif.sh@95 -- # fio /dev/fd/62 00:31:07.766 21:44:26 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:31:07.766 21:44:26 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:31:07.766 21:44:26 -- nvmf/common.sh@520 -- # config=() 00:31:07.766 21:44:26 -- nvmf/common.sh@520 -- # local subsystem config 00:31:07.766 21:44:26 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:07.766 21:44:26 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:31:07.766 21:44:26 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:07.766 21:44:26 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:31:07.766 { 00:31:07.766 "params": { 00:31:07.766 "name": "Nvme$subsystem", 00:31:07.766 "trtype": "$TEST_TRANSPORT", 00:31:07.766 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:07.766 "adrfam": "ipv4", 00:31:07.766 "trsvcid": "$NVMF_PORT", 00:31:07.766 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:07.766 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:07.766 "hdgst": ${hdgst:-false}, 00:31:07.766 "ddgst": ${ddgst:-false} 00:31:07.766 }, 00:31:07.766 "method": "bdev_nvme_attach_controller" 00:31:07.766 } 00:31:07.766 EOF 00:31:07.766 )") 00:31:07.766 21:44:26 -- target/dif.sh@82 -- # gen_fio_conf 00:31:07.766 21:44:26 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:31:07.766 21:44:26 -- target/dif.sh@54 -- # local file 00:31:07.766 21:44:26 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:07.766 21:44:26 -- target/dif.sh@56 -- # cat 00:31:07.766 21:44:26 -- common/autotest_common.sh@1318 -- # local sanitizers 00:31:07.766 21:44:26 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:31:07.766 21:44:26 -- common/autotest_common.sh@1320 -- # shift 00:31:07.766 21:44:26 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:31:07.766 21:44:26 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:31:07.766 21:44:26 -- nvmf/common.sh@542 -- # cat 00:31:07.766 21:44:26 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:31:07.766 21:44:26 -- target/dif.sh@72 -- # (( file = 1 )) 00:31:07.766 21:44:26 -- target/dif.sh@72 -- # (( file <= files )) 00:31:07.766 21:44:26 -- common/autotest_common.sh@1324 -- # grep libasan 00:31:07.766 21:44:26 -- target/dif.sh@73 -- # cat 00:31:07.766 21:44:26 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:31:07.766 21:44:26 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:31:07.766 21:44:26 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:31:07.766 { 00:31:07.766 "params": { 00:31:07.766 "name": "Nvme$subsystem", 00:31:07.766 "trtype": "$TEST_TRANSPORT", 00:31:07.766 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:07.766 "adrfam": "ipv4", 00:31:07.766 "trsvcid": "$NVMF_PORT", 00:31:07.766 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:07.766 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:07.766 "hdgst": ${hdgst:-false}, 00:31:07.766 "ddgst": ${ddgst:-false} 00:31:07.766 }, 00:31:07.766 "method": "bdev_nvme_attach_controller" 00:31:07.766 } 00:31:07.766 EOF 00:31:07.766 )") 00:31:07.766 21:44:26 -- target/dif.sh@72 -- # (( file++ )) 00:31:07.766 21:44:26 -- 
target/dif.sh@72 -- # (( file <= files )) 00:31:07.766 21:44:26 -- nvmf/common.sh@542 -- # cat 00:31:07.766 21:44:26 -- nvmf/common.sh@544 -- # jq . 00:31:07.766 21:44:26 -- nvmf/common.sh@545 -- # IFS=, 00:31:07.766 21:44:26 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:31:07.766 "params": { 00:31:07.766 "name": "Nvme0", 00:31:07.766 "trtype": "tcp", 00:31:07.766 "traddr": "10.0.0.2", 00:31:07.766 "adrfam": "ipv4", 00:31:07.766 "trsvcid": "4420", 00:31:07.766 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:07.766 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:07.766 "hdgst": false, 00:31:07.766 "ddgst": false 00:31:07.766 }, 00:31:07.766 "method": "bdev_nvme_attach_controller" 00:31:07.766 },{ 00:31:07.766 "params": { 00:31:07.766 "name": "Nvme1", 00:31:07.766 "trtype": "tcp", 00:31:07.766 "traddr": "10.0.0.2", 00:31:07.766 "adrfam": "ipv4", 00:31:07.766 "trsvcid": "4420", 00:31:07.766 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:07.766 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:07.766 "hdgst": false, 00:31:07.766 "ddgst": false 00:31:07.766 }, 00:31:07.766 "method": "bdev_nvme_attach_controller" 00:31:07.766 }' 00:31:07.766 21:44:26 -- common/autotest_common.sh@1324 -- # asan_lib= 00:31:07.766 21:44:26 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:31:07.766 21:44:26 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:31:07.766 21:44:26 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:31:07.766 21:44:26 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:31:07.766 21:44:26 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:31:07.766 21:44:26 -- common/autotest_common.sh@1324 -- # asan_lib= 00:31:07.766 21:44:26 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:31:07.766 21:44:26 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:31:07.766 21:44:26 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:07.766 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:31:07.766 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:31:07.766 fio-3.35 00:31:07.766 Starting 2 threads 00:31:07.766 [2024-07-11 21:44:27.596240] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
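Up to this point the multi-subsystem test has created, via rpc_cmd, two null bdevs and two NVMe-oF subsystems (cnode0 and cnode1), each with a TCP listener on 10.0.0.2:4420, and the JSON above attaches them as Nvme0 and Nvme1 for fio. Since rpc_cmd is effectively a wrapper around the target's RPC socket, the same setup can be expressed directly with scripts/rpc.py; a condensed sketch for the second subsystem (the first is identical apart from the names), using the arguments seen in the trace and assuming the transport was already created with nvmf_create_transport -t tcp -o --dif-insert-or-strip as traced earlier:

# 64 MB null bdev, 512-byte blocks, 16-byte metadata, DIF type 1, exported over NVMe/TCP.
./scripts/rpc.py bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420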
00:31:07.766 [2024-07-11 21:44:27.596349] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:31:17.731 00:31:17.731 filename0: (groupid=0, jobs=1): err= 0: pid=86768: Thu Jul 11 21:44:37 2024 00:31:17.731 read: IOPS=4362, BW=17.0MiB/s (17.9MB/s)(170MiB/10001msec) 00:31:17.731 slat (nsec): min=7450, max=73934, avg=22816.48, stdev=7136.41 00:31:17.731 clat (usec): min=411, max=2467, avg=857.45, stdev=47.08 00:31:17.731 lat (usec): min=419, max=2489, avg=880.27, stdev=48.44 00:31:17.731 clat percentiles (usec): 00:31:17.731 | 1.00th=[ 750], 5.00th=[ 783], 10.00th=[ 799], 20.00th=[ 824], 00:31:17.731 | 30.00th=[ 832], 40.00th=[ 848], 50.00th=[ 857], 60.00th=[ 873], 00:31:17.731 | 70.00th=[ 881], 80.00th=[ 898], 90.00th=[ 914], 95.00th=[ 922], 00:31:17.731 | 99.00th=[ 947], 99.50th=[ 955], 99.90th=[ 1045], 99.95th=[ 1303], 00:31:17.731 | 99.99th=[ 1663] 00:31:17.731 bw ( KiB/s): min=17216, max=17664, per=50.05%, avg=17466.95, stdev=91.92, samples=19 00:31:17.731 iops : min= 4304, max= 4416, avg=4366.74, stdev=22.98, samples=19 00:31:17.731 lat (usec) : 500=0.01%, 750=1.01%, 1000=98.84% 00:31:17.731 lat (msec) : 2=0.13%, 4=0.01% 00:31:17.731 cpu : usr=93.49%, sys=5.13%, ctx=10, majf=0, minf=0 00:31:17.731 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:17.731 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:17.731 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:17.731 issued rwts: total=43632,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:17.731 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:17.731 filename1: (groupid=0, jobs=1): err= 0: pid=86769: Thu Jul 11 21:44:37 2024 00:31:17.731 read: IOPS=4362, BW=17.0MiB/s (17.9MB/s)(170MiB/10001msec) 00:31:17.731 slat (nsec): min=6717, max=84785, avg=22999.11, stdev=7684.40 00:31:17.731 clat (usec): min=686, max=2457, avg=854.22, stdev=43.21 00:31:17.731 lat (usec): min=694, max=2482, avg=877.22, stdev=45.54 00:31:17.731 clat percentiles (usec): 00:31:17.731 | 1.00th=[ 775], 5.00th=[ 791], 10.00th=[ 807], 20.00th=[ 824], 00:31:17.731 | 30.00th=[ 832], 40.00th=[ 840], 50.00th=[ 848], 60.00th=[ 865], 00:31:17.731 | 70.00th=[ 873], 80.00th=[ 889], 90.00th=[ 906], 95.00th=[ 922], 00:31:17.731 | 99.00th=[ 947], 99.50th=[ 955], 99.90th=[ 1029], 99.95th=[ 1336], 00:31:17.731 | 99.99th=[ 1647] 00:31:17.731 bw ( KiB/s): min=17216, max=17664, per=50.05%, avg=17466.95, stdev=91.92, samples=19 00:31:17.731 iops : min= 4304, max= 4416, avg=4366.74, stdev=22.98, samples=19 00:31:17.731 lat (usec) : 750=0.07%, 1000=99.81% 00:31:17.731 lat (msec) : 2=0.11%, 4=0.01% 00:31:17.731 cpu : usr=92.59%, sys=6.08%, ctx=17, majf=0, minf=0 00:31:17.731 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:17.731 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:17.731 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:17.731 issued rwts: total=43628,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:17.731 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:17.731 00:31:17.731 Run status group 0 (all jobs): 00:31:17.731 READ: bw=34.1MiB/s (35.7MB/s), 17.0MiB/s-17.0MiB/s (17.9MB/s-17.9MB/s), io=341MiB (357MB), run=10001-10001msec 00:31:17.731 21:44:37 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:31:17.731 21:44:37 -- target/dif.sh@43 -- # local sub 00:31:17.731 21:44:37 -- target/dif.sh@45 -- # for sub in "$@" 00:31:17.731 21:44:37 -- 
target/dif.sh@46 -- # destroy_subsystem 0 00:31:17.731 21:44:37 -- target/dif.sh@36 -- # local sub_id=0 00:31:17.731 21:44:37 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:17.731 21:44:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:17.731 21:44:37 -- common/autotest_common.sh@10 -- # set +x 00:31:17.731 21:44:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:17.731 21:44:37 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:17.731 21:44:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:17.731 21:44:37 -- common/autotest_common.sh@10 -- # set +x 00:31:17.731 21:44:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:17.731 21:44:37 -- target/dif.sh@45 -- # for sub in "$@" 00:31:17.731 21:44:37 -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:17.731 21:44:37 -- target/dif.sh@36 -- # local sub_id=1 00:31:17.731 21:44:37 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:17.731 21:44:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:17.731 21:44:37 -- common/autotest_common.sh@10 -- # set +x 00:31:17.731 21:44:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:17.731 21:44:37 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:17.731 21:44:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:17.731 21:44:37 -- common/autotest_common.sh@10 -- # set +x 00:31:17.731 21:44:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:17.731 00:31:17.731 real 0m11.116s 00:31:17.731 user 0m19.345s 00:31:17.731 sys 0m1.422s 00:31:17.731 21:44:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:17.731 ************************************ 00:31:17.731 END TEST fio_dif_1_multi_subsystems 00:31:17.731 ************************************ 00:31:17.731 21:44:37 -- common/autotest_common.sh@10 -- # set +x 00:31:17.731 21:44:38 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:31:17.731 21:44:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:31:17.731 21:44:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:17.731 21:44:38 -- common/autotest_common.sh@10 -- # set +x 00:31:17.731 ************************************ 00:31:17.731 START TEST fio_dif_rand_params 00:31:17.731 ************************************ 00:31:17.731 21:44:38 -- common/autotest_common.sh@1104 -- # fio_dif_rand_params 00:31:17.731 21:44:38 -- target/dif.sh@100 -- # local NULL_DIF 00:31:17.731 21:44:38 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:31:17.731 21:44:38 -- target/dif.sh@103 -- # NULL_DIF=3 00:31:17.731 21:44:38 -- target/dif.sh@103 -- # bs=128k 00:31:17.731 21:44:38 -- target/dif.sh@103 -- # numjobs=3 00:31:17.731 21:44:38 -- target/dif.sh@103 -- # iodepth=3 00:31:17.731 21:44:38 -- target/dif.sh@103 -- # runtime=5 00:31:17.731 21:44:38 -- target/dif.sh@105 -- # create_subsystems 0 00:31:17.731 21:44:38 -- target/dif.sh@28 -- # local sub 00:31:17.731 21:44:38 -- target/dif.sh@30 -- # for sub in "$@" 00:31:17.731 21:44:38 -- target/dif.sh@31 -- # create_subsystem 0 00:31:17.731 21:44:38 -- target/dif.sh@18 -- # local sub_id=0 00:31:17.731 21:44:38 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:31:17.731 21:44:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:17.731 21:44:38 -- common/autotest_common.sh@10 -- # set +x 00:31:17.731 bdev_null0 00:31:17.731 21:44:38 -- common/autotest_common.sh@579 -- # [[ 0 
== 0 ]] 00:31:17.731 21:44:38 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:17.731 21:44:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:17.731 21:44:38 -- common/autotest_common.sh@10 -- # set +x 00:31:17.731 21:44:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:17.731 21:44:38 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:17.731 21:44:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:17.731 21:44:38 -- common/autotest_common.sh@10 -- # set +x 00:31:17.731 21:44:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:17.731 21:44:38 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:17.731 21:44:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:17.731 21:44:38 -- common/autotest_common.sh@10 -- # set +x 00:31:17.731 [2024-07-11 21:44:38.055904] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:17.731 21:44:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:17.731 21:44:38 -- target/dif.sh@106 -- # fio /dev/fd/62 00:31:17.731 21:44:38 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:31:17.731 21:44:38 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:31:17.731 21:44:38 -- nvmf/common.sh@520 -- # config=() 00:31:17.731 21:44:38 -- nvmf/common.sh@520 -- # local subsystem config 00:31:17.732 21:44:38 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:31:17.732 21:44:38 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:17.732 21:44:38 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:31:17.732 { 00:31:17.732 "params": { 00:31:17.732 "name": "Nvme$subsystem", 00:31:17.732 "trtype": "$TEST_TRANSPORT", 00:31:17.732 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:17.732 "adrfam": "ipv4", 00:31:17.732 "trsvcid": "$NVMF_PORT", 00:31:17.732 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:17.732 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:17.732 "hdgst": ${hdgst:-false}, 00:31:17.732 "ddgst": ${ddgst:-false} 00:31:17.732 }, 00:31:17.732 "method": "bdev_nvme_attach_controller" 00:31:17.732 } 00:31:17.732 EOF 00:31:17.732 )") 00:31:17.732 21:44:38 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:17.732 21:44:38 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:31:17.732 21:44:38 -- target/dif.sh@82 -- # gen_fio_conf 00:31:17.732 21:44:38 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:17.732 21:44:38 -- common/autotest_common.sh@1318 -- # local sanitizers 00:31:17.732 21:44:38 -- target/dif.sh@54 -- # local file 00:31:17.732 21:44:38 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:31:17.732 21:44:38 -- target/dif.sh@56 -- # cat 00:31:17.732 21:44:38 -- common/autotest_common.sh@1320 -- # shift 00:31:17.732 21:44:38 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:31:17.732 21:44:38 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:31:17.732 21:44:38 -- nvmf/common.sh@542 -- # cat 00:31:17.732 21:44:38 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:31:17.732 21:44:38 -- common/autotest_common.sh@1324 -- # 
grep libasan 00:31:17.732 21:44:38 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:31:17.732 21:44:38 -- target/dif.sh@72 -- # (( file = 1 )) 00:31:17.732 21:44:38 -- target/dif.sh@72 -- # (( file <= files )) 00:31:17.732 21:44:38 -- nvmf/common.sh@544 -- # jq . 00:31:17.732 21:44:38 -- nvmf/common.sh@545 -- # IFS=, 00:31:17.732 21:44:38 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:31:17.732 "params": { 00:31:17.732 "name": "Nvme0", 00:31:17.732 "trtype": "tcp", 00:31:17.732 "traddr": "10.0.0.2", 00:31:17.732 "adrfam": "ipv4", 00:31:17.732 "trsvcid": "4420", 00:31:17.732 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:17.732 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:17.732 "hdgst": false, 00:31:17.732 "ddgst": false 00:31:17.732 }, 00:31:17.732 "method": "bdev_nvme_attach_controller" 00:31:17.732 }' 00:31:17.732 21:44:38 -- common/autotest_common.sh@1324 -- # asan_lib= 00:31:17.732 21:44:38 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:31:17.732 21:44:38 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:31:17.732 21:44:38 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:31:17.732 21:44:38 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:31:17.732 21:44:38 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:31:17.732 21:44:38 -- common/autotest_common.sh@1324 -- # asan_lib= 00:31:17.732 21:44:38 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:31:17.732 21:44:38 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:31:17.732 21:44:38 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:17.732 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:31:17.732 ... 00:31:17.732 fio-3.35 00:31:17.732 Starting 3 threads 00:31:17.988 [2024-07-11 21:44:38.692761] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
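The three jobs reported below each settle at about 267 IOPS of 128 KiB random reads with an average completion latency of roughly 11.19 ms at iodepth 3 against the DIF-type-3 null bdev. Those figures are mutually consistent, which is a quick way to sanity-check fio output like this; the arithmetic below uses only the values from the fio lines that follow:

# Arithmetic only - not part of the test itself.
awk 'BEGIN {
    printf "267 IOPS x 128 KiB            = %.1f MiB/s (fio reports 33.4 MiB/s)\n", 267 * 128 / 1024
    printf "iodepth 3 / 0.01119 s avg lat = %.0f IOPS  (fio reports ~267 IOPS per job)\n", 3 / 0.01119
}'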
00:31:17.989 [2024-07-11 21:44:38.692851] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:31:23.247 00:31:23.247 filename0: (groupid=0, jobs=1): err= 0: pid=86925: Thu Jul 11 21:44:43 2024 00:31:23.247 read: IOPS=267, BW=33.4MiB/s (35.0MB/s)(167MiB/5007msec) 00:31:23.247 slat (nsec): min=7593, max=47351, avg=16521.32, stdev=4380.38 00:31:23.247 clat (usec): min=11065, max=13473, avg=11188.84, stdev=123.99 00:31:23.247 lat (usec): min=11076, max=13497, avg=11205.36, stdev=124.23 00:31:23.247 clat percentiles (usec): 00:31:23.247 | 1.00th=[11076], 5.00th=[11076], 10.00th=[11076], 20.00th=[11207], 00:31:23.247 | 30.00th=[11207], 40.00th=[11207], 50.00th=[11207], 60.00th=[11207], 00:31:23.247 | 70.00th=[11207], 80.00th=[11207], 90.00th=[11207], 95.00th=[11207], 00:31:23.247 | 99.00th=[11469], 99.50th=[11731], 99.90th=[13435], 99.95th=[13435], 00:31:23.247 | 99.99th=[13435] 00:31:23.247 bw ( KiB/s): min=33792, max=34560, per=33.36%, avg=34218.67, stdev=404.77, samples=9 00:31:23.247 iops : min= 264, max= 270, avg=267.33, stdev= 3.16, samples=9 00:31:23.247 lat (msec) : 20=100.00% 00:31:23.247 cpu : usr=91.19%, sys=8.23%, ctx=9, majf=0, minf=9 00:31:23.247 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:23.247 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:23.247 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:23.247 issued rwts: total=1338,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:23.247 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:23.247 filename0: (groupid=0, jobs=1): err= 0: pid=86926: Thu Jul 11 21:44:43 2024 00:31:23.247 read: IOPS=267, BW=33.4MiB/s (35.0MB/s)(167MiB/5005msec) 00:31:23.247 slat (nsec): min=7612, max=44195, avg=16483.05, stdev=4103.59 00:31:23.247 clat (usec): min=11060, max=11745, avg=11184.22, stdev=65.95 00:31:23.247 lat (usec): min=11069, max=11760, avg=11200.70, stdev=66.61 00:31:23.247 clat percentiles (usec): 00:31:23.247 | 1.00th=[11076], 5.00th=[11076], 10.00th=[11076], 20.00th=[11207], 00:31:23.247 | 30.00th=[11207], 40.00th=[11207], 50.00th=[11207], 60.00th=[11207], 00:31:23.247 | 70.00th=[11207], 80.00th=[11207], 90.00th=[11207], 95.00th=[11207], 00:31:23.247 | 99.00th=[11469], 99.50th=[11731], 99.90th=[11731], 99.95th=[11731], 00:31:23.247 | 99.99th=[11731] 00:31:23.247 bw ( KiB/s): min=33792, max=34560, per=33.36%, avg=34218.67, stdev=404.77, samples=9 00:31:23.247 iops : min= 264, max= 270, avg=267.33, stdev= 3.16, samples=9 00:31:23.247 lat (msec) : 20=100.00% 00:31:23.247 cpu : usr=91.33%, sys=7.79%, ctx=32, majf=0, minf=9 00:31:23.247 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:23.247 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:23.247 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:23.248 issued rwts: total=1338,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:23.248 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:23.248 filename0: (groupid=0, jobs=1): err= 0: pid=86927: Thu Jul 11 21:44:43 2024 00:31:23.248 read: IOPS=267, BW=33.4MiB/s (35.0MB/s)(167MiB/5009msec) 00:31:23.248 slat (nsec): min=5191, max=45549, avg=15738.74, stdev=4491.75 00:31:23.248 clat (usec): min=11051, max=15132, avg=11194.09, stdev=197.64 00:31:23.248 lat (usec): min=11067, max=15157, avg=11209.83, stdev=197.82 00:31:23.248 clat percentiles (usec): 00:31:23.248 | 1.00th=[11076], 5.00th=[11076], 10.00th=[11076], 
20.00th=[11207], 00:31:23.248 | 30.00th=[11207], 40.00th=[11207], 50.00th=[11207], 60.00th=[11207], 00:31:23.248 | 70.00th=[11207], 80.00th=[11207], 90.00th=[11207], 95.00th=[11207], 00:31:23.248 | 99.00th=[11469], 99.50th=[11731], 99.90th=[15139], 99.95th=[15139], 00:31:23.248 | 99.99th=[15139] 00:31:23.248 bw ( KiB/s): min=33792, max=34560, per=33.32%, avg=34176.00, stdev=404.77, samples=10 00:31:23.248 iops : min= 264, max= 270, avg=267.00, stdev= 3.16, samples=10 00:31:23.248 lat (msec) : 20=100.00% 00:31:23.248 cpu : usr=91.13%, sys=8.25%, ctx=4, majf=0, minf=0 00:31:23.248 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:23.248 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:23.248 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:23.248 issued rwts: total=1338,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:23.248 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:23.248 00:31:23.248 Run status group 0 (all jobs): 00:31:23.248 READ: bw=100MiB/s (105MB/s), 33.4MiB/s-33.4MiB/s (35.0MB/s-35.0MB/s), io=502MiB (526MB), run=5005-5009msec 00:31:23.248 21:44:44 -- target/dif.sh@107 -- # destroy_subsystems 0 00:31:23.248 21:44:44 -- target/dif.sh@43 -- # local sub 00:31:23.248 21:44:44 -- target/dif.sh@45 -- # for sub in "$@" 00:31:23.248 21:44:44 -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:23.248 21:44:44 -- target/dif.sh@36 -- # local sub_id=0 00:31:23.248 21:44:44 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:23.248 21:44:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:23.248 21:44:44 -- common/autotest_common.sh@10 -- # set +x 00:31:23.248 21:44:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:23.248 21:44:44 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:23.248 21:44:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:23.248 21:44:44 -- common/autotest_common.sh@10 -- # set +x 00:31:23.248 21:44:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:23.248 21:44:44 -- target/dif.sh@109 -- # NULL_DIF=2 00:31:23.248 21:44:44 -- target/dif.sh@109 -- # bs=4k 00:31:23.248 21:44:44 -- target/dif.sh@109 -- # numjobs=8 00:31:23.248 21:44:44 -- target/dif.sh@109 -- # iodepth=16 00:31:23.248 21:44:44 -- target/dif.sh@109 -- # runtime= 00:31:23.248 21:44:44 -- target/dif.sh@109 -- # files=2 00:31:23.248 21:44:44 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:31:23.248 21:44:44 -- target/dif.sh@28 -- # local sub 00:31:23.248 21:44:44 -- target/dif.sh@30 -- # for sub in "$@" 00:31:23.248 21:44:44 -- target/dif.sh@31 -- # create_subsystem 0 00:31:23.248 21:44:44 -- target/dif.sh@18 -- # local sub_id=0 00:31:23.248 21:44:44 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:31:23.248 21:44:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:23.248 21:44:44 -- common/autotest_common.sh@10 -- # set +x 00:31:23.248 bdev_null0 00:31:23.248 21:44:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:23.248 21:44:44 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:23.248 21:44:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:23.248 21:44:44 -- common/autotest_common.sh@10 -- # set +x 00:31:23.248 21:44:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:23.248 21:44:44 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:23.248 21:44:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:23.248 21:44:44 -- common/autotest_common.sh@10 -- # set +x 00:31:23.248 21:44:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:23.248 21:44:44 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:23.248 21:44:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:23.248 21:44:44 -- common/autotest_common.sh@10 -- # set +x 00:31:23.248 [2024-07-11 21:44:44.080199] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:23.248 21:44:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:23.248 21:44:44 -- target/dif.sh@30 -- # for sub in "$@" 00:31:23.248 21:44:44 -- target/dif.sh@31 -- # create_subsystem 1 00:31:23.248 21:44:44 -- target/dif.sh@18 -- # local sub_id=1 00:31:23.248 21:44:44 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:31:23.248 21:44:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:23.248 21:44:44 -- common/autotest_common.sh@10 -- # set +x 00:31:23.248 bdev_null1 00:31:23.248 21:44:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:23.248 21:44:44 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:23.248 21:44:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:23.248 21:44:44 -- common/autotest_common.sh@10 -- # set +x 00:31:23.248 21:44:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:23.248 21:44:44 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:23.248 21:44:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:23.248 21:44:44 -- common/autotest_common.sh@10 -- # set +x 00:31:23.248 21:44:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:23.248 21:44:44 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:23.248 21:44:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:23.248 21:44:44 -- common/autotest_common.sh@10 -- # set +x 00:31:23.248 21:44:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:23.248 21:44:44 -- target/dif.sh@30 -- # for sub in "$@" 00:31:23.248 21:44:44 -- target/dif.sh@31 -- # create_subsystem 2 00:31:23.248 21:44:44 -- target/dif.sh@18 -- # local sub_id=2 00:31:23.248 21:44:44 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:31:23.248 21:44:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:23.248 21:44:44 -- common/autotest_common.sh@10 -- # set +x 00:31:23.248 bdev_null2 00:31:23.248 21:44:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:23.248 21:44:44 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:31:23.248 21:44:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:23.248 21:44:44 -- common/autotest_common.sh@10 -- # set +x 00:31:23.248 21:44:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:23.248 21:44:44 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:31:23.248 21:44:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:23.248 21:44:44 -- common/autotest_common.sh@10 -- # set +x 00:31:23.248 21:44:44 -- common/autotest_common.sh@579 -- # [[ 0 
== 0 ]] 00:31:23.248 21:44:44 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:31:23.248 21:44:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:23.248 21:44:44 -- common/autotest_common.sh@10 -- # set +x 00:31:23.248 21:44:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:23.248 21:44:44 -- target/dif.sh@112 -- # fio /dev/fd/62 00:31:23.248 21:44:44 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:31:23.248 21:44:44 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:31:23.248 21:44:44 -- nvmf/common.sh@520 -- # config=() 00:31:23.248 21:44:44 -- nvmf/common.sh@520 -- # local subsystem config 00:31:23.248 21:44:44 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:31:23.248 21:44:44 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:31:23.248 { 00:31:23.248 "params": { 00:31:23.248 "name": "Nvme$subsystem", 00:31:23.248 "trtype": "$TEST_TRANSPORT", 00:31:23.248 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:23.248 "adrfam": "ipv4", 00:31:23.248 "trsvcid": "$NVMF_PORT", 00:31:23.248 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:23.248 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:23.248 "hdgst": ${hdgst:-false}, 00:31:23.248 "ddgst": ${ddgst:-false} 00:31:23.248 }, 00:31:23.248 "method": "bdev_nvme_attach_controller" 00:31:23.248 } 00:31:23.248 EOF 00:31:23.248 )") 00:31:23.248 21:44:44 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:23.248 21:44:44 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:23.248 21:44:44 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:31:23.248 21:44:44 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:23.248 21:44:44 -- common/autotest_common.sh@1318 -- # local sanitizers 00:31:23.248 21:44:44 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:31:23.248 21:44:44 -- common/autotest_common.sh@1320 -- # shift 00:31:23.248 21:44:44 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:31:23.248 21:44:44 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:31:23.248 21:44:44 -- nvmf/common.sh@542 -- # cat 00:31:23.248 21:44:44 -- target/dif.sh@82 -- # gen_fio_conf 00:31:23.248 21:44:44 -- target/dif.sh@54 -- # local file 00:31:23.248 21:44:44 -- target/dif.sh@56 -- # cat 00:31:23.248 21:44:44 -- common/autotest_common.sh@1324 -- # grep libasan 00:31:23.248 21:44:44 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:31:23.248 21:44:44 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:31:23.248 21:44:44 -- target/dif.sh@72 -- # (( file = 1 )) 00:31:23.248 21:44:44 -- target/dif.sh@72 -- # (( file <= files )) 00:31:23.248 21:44:44 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:31:23.248 21:44:44 -- target/dif.sh@73 -- # cat 00:31:23.248 21:44:44 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:31:23.248 { 00:31:23.248 "params": { 00:31:23.248 "name": "Nvme$subsystem", 00:31:23.248 "trtype": "$TEST_TRANSPORT", 00:31:23.248 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:23.248 "adrfam": "ipv4", 00:31:23.248 "trsvcid": "$NVMF_PORT", 00:31:23.248 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:23.248 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:23.248 "hdgst": ${hdgst:-false}, 
00:31:23.248 "ddgst": ${ddgst:-false} 00:31:23.248 }, 00:31:23.248 "method": "bdev_nvme_attach_controller" 00:31:23.248 } 00:31:23.248 EOF 00:31:23.248 )") 00:31:23.248 21:44:44 -- nvmf/common.sh@542 -- # cat 00:31:23.248 21:44:44 -- target/dif.sh@72 -- # (( file++ )) 00:31:23.248 21:44:44 -- target/dif.sh@72 -- # (( file <= files )) 00:31:23.248 21:44:44 -- target/dif.sh@73 -- # cat 00:31:23.248 21:44:44 -- target/dif.sh@72 -- # (( file++ )) 00:31:23.248 21:44:44 -- target/dif.sh@72 -- # (( file <= files )) 00:31:23.248 21:44:44 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:31:23.249 21:44:44 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:31:23.249 { 00:31:23.249 "params": { 00:31:23.249 "name": "Nvme$subsystem", 00:31:23.249 "trtype": "$TEST_TRANSPORT", 00:31:23.249 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:23.249 "adrfam": "ipv4", 00:31:23.249 "trsvcid": "$NVMF_PORT", 00:31:23.249 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:23.249 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:23.249 "hdgst": ${hdgst:-false}, 00:31:23.249 "ddgst": ${ddgst:-false} 00:31:23.249 }, 00:31:23.249 "method": "bdev_nvme_attach_controller" 00:31:23.249 } 00:31:23.249 EOF 00:31:23.249 )") 00:31:23.249 21:44:44 -- nvmf/common.sh@542 -- # cat 00:31:23.249 21:44:44 -- nvmf/common.sh@544 -- # jq . 00:31:23.249 21:44:44 -- nvmf/common.sh@545 -- # IFS=, 00:31:23.249 21:44:44 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:31:23.249 "params": { 00:31:23.249 "name": "Nvme0", 00:31:23.249 "trtype": "tcp", 00:31:23.249 "traddr": "10.0.0.2", 00:31:23.249 "adrfam": "ipv4", 00:31:23.249 "trsvcid": "4420", 00:31:23.249 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:23.249 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:23.249 "hdgst": false, 00:31:23.249 "ddgst": false 00:31:23.249 }, 00:31:23.249 "method": "bdev_nvme_attach_controller" 00:31:23.249 },{ 00:31:23.249 "params": { 00:31:23.249 "name": "Nvme1", 00:31:23.249 "trtype": "tcp", 00:31:23.249 "traddr": "10.0.0.2", 00:31:23.249 "adrfam": "ipv4", 00:31:23.249 "trsvcid": "4420", 00:31:23.249 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:23.249 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:23.249 "hdgst": false, 00:31:23.249 "ddgst": false 00:31:23.249 }, 00:31:23.249 "method": "bdev_nvme_attach_controller" 00:31:23.249 },{ 00:31:23.249 "params": { 00:31:23.249 "name": "Nvme2", 00:31:23.249 "trtype": "tcp", 00:31:23.249 "traddr": "10.0.0.2", 00:31:23.249 "adrfam": "ipv4", 00:31:23.249 "trsvcid": "4420", 00:31:23.249 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:31:23.249 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:31:23.249 "hdgst": false, 00:31:23.249 "ddgst": false 00:31:23.249 }, 00:31:23.249 "method": "bdev_nvme_attach_controller" 00:31:23.249 }' 00:31:23.249 21:44:44 -- common/autotest_common.sh@1324 -- # asan_lib= 00:31:23.249 21:44:44 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:31:23.249 21:44:44 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:31:23.249 21:44:44 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:31:23.249 21:44:44 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:31:23.249 21:44:44 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:31:23.507 21:44:44 -- common/autotest_common.sh@1324 -- # asan_lib= 00:31:23.507 21:44:44 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:31:23.507 21:44:44 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:31:23.507 21:44:44 -- 
common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:23.507 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:23.507 ... 00:31:23.507 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:23.507 ... 00:31:23.507 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:23.507 ... 00:31:23.507 fio-3.35 00:31:23.507 Starting 24 threads 00:31:24.072 [2024-07-11 21:44:44.903860] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:31:24.072 [2024-07-11 21:44:44.903934] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:31:36.271 00:31:36.271 filename0: (groupid=0, jobs=1): err= 0: pid=87022: Thu Jul 11 21:44:55 2024 00:31:36.271 read: IOPS=210, BW=841KiB/s (861kB/s)(8432KiB/10030msec) 00:31:36.271 slat (usec): min=4, max=8056, avg=21.05, stdev=195.86 00:31:36.271 clat (msec): min=16, max=156, avg=75.94, stdev=23.89 00:31:36.271 lat (msec): min=16, max=156, avg=75.96, stdev=23.90 00:31:36.271 clat percentiles (msec): 00:31:36.271 | 1.00th=[ 39], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 61], 00:31:36.271 | 30.00th=[ 63], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 75], 00:31:36.271 | 70.00th=[ 84], 80.00th=[ 95], 90.00th=[ 109], 95.00th=[ 121], 00:31:36.271 | 99.00th=[ 148], 99.50th=[ 157], 99.90th=[ 157], 99.95th=[ 157], 00:31:36.271 | 99.99th=[ 157] 00:31:36.271 bw ( KiB/s): min= 416, max= 1129, per=3.93%, avg=838.85, stdev=176.00, samples=20 00:31:36.271 iops : min= 104, max= 282, avg=209.70, stdev=43.98, samples=20 00:31:36.271 lat (msec) : 20=0.76%, 50=14.14%, 100=69.54%, 250=15.56% 00:31:36.271 cpu : usr=34.22%, sys=1.43%, ctx=1097, majf=0, minf=9 00:31:36.271 IO depths : 1=0.1%, 2=1.9%, 4=7.8%, 8=74.7%, 16=15.7%, 32=0.0%, >=64=0.0% 00:31:36.271 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:36.271 complete : 0=0.0%, 4=89.7%, 8=8.6%, 16=1.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:36.271 issued rwts: total=2108,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:36.271 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:36.271 filename0: (groupid=0, jobs=1): err= 0: pid=87023: Thu Jul 11 21:44:55 2024 00:31:36.271 read: IOPS=226, BW=905KiB/s (927kB/s)(9072KiB/10022msec) 00:31:36.271 slat (usec): min=4, max=12025, avg=29.75, stdev=285.08 00:31:36.271 clat (msec): min=21, max=156, avg=70.54, stdev=20.48 00:31:36.271 lat (msec): min=21, max=156, avg=70.57, stdev=20.48 00:31:36.271 clat percentiles (msec): 00:31:36.271 | 1.00th=[ 36], 5.00th=[ 43], 10.00th=[ 47], 20.00th=[ 51], 00:31:36.271 | 30.00th=[ 60], 40.00th=[ 65], 50.00th=[ 70], 60.00th=[ 72], 00:31:36.271 | 70.00th=[ 78], 80.00th=[ 85], 90.00th=[ 103], 95.00th=[ 111], 00:31:36.271 | 99.00th=[ 124], 99.50th=[ 127], 99.90th=[ 150], 99.95th=[ 157], 00:31:36.271 | 99.99th=[ 157] 00:31:36.271 bw ( KiB/s): min= 632, max= 1000, per=4.22%, avg=900.55, stdev=119.96, samples=20 00:31:36.271 iops : min= 158, max= 250, avg=225.10, stdev=30.03, samples=20 00:31:36.271 lat (msec) : 50=18.87%, 100=70.46%, 250=10.67% 00:31:36.271 cpu : usr=42.37%, sys=1.99%, ctx=1102, majf=0, minf=9 00:31:36.271 IO depths : 1=0.1%, 2=0.9%, 4=3.3%, 8=80.2%, 16=15.6%, 32=0.0%, >=64=0.0% 00:31:36.271 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:36.271 
complete : 0=0.0%, 4=87.9%, 8=11.4%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:36.271 issued rwts: total=2268,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:36.271 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:36.271 filename0: (groupid=0, jobs=1): err= 0: pid=87024: Thu Jul 11 21:44:55 2024 00:31:36.271 read: IOPS=213, BW=855KiB/s (875kB/s)(8580KiB/10037msec) 00:31:36.271 slat (usec): min=7, max=8037, avg=37.36, stdev=345.83 00:31:36.271 clat (msec): min=13, max=158, avg=74.64, stdev=22.34 00:31:36.271 lat (msec): min=13, max=158, avg=74.67, stdev=22.34 00:31:36.271 clat percentiles (msec): 00:31:36.271 | 1.00th=[ 40], 5.00th=[ 44], 10.00th=[ 48], 20.00th=[ 55], 00:31:36.271 | 30.00th=[ 64], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 77], 00:31:36.271 | 70.00th=[ 81], 80.00th=[ 92], 90.00th=[ 107], 95.00th=[ 118], 00:31:36.271 | 99.00th=[ 140], 99.50th=[ 142], 99.90th=[ 155], 99.95th=[ 159], 00:31:36.271 | 99.99th=[ 159] 00:31:36.271 bw ( KiB/s): min= 544, max= 1129, per=3.99%, avg=852.45, stdev=150.51, samples=20 00:31:36.271 iops : min= 136, max= 282, avg=213.10, stdev=37.60, samples=20 00:31:36.271 lat (msec) : 20=0.75%, 50=13.89%, 100=71.98%, 250=13.38% 00:31:36.271 cpu : usr=44.60%, sys=2.06%, ctx=1537, majf=0, minf=9 00:31:36.271 IO depths : 1=0.1%, 2=1.9%, 4=7.6%, 8=75.2%, 16=15.4%, 32=0.0%, >=64=0.0% 00:31:36.271 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:36.271 complete : 0=0.0%, 4=89.5%, 8=8.8%, 16=1.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:36.271 issued rwts: total=2145,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:36.271 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:36.271 filename0: (groupid=0, jobs=1): err= 0: pid=87025: Thu Jul 11 21:44:55 2024 00:31:36.271 read: IOPS=223, BW=893KiB/s (914kB/s)(8956KiB/10033msec) 00:31:36.271 slat (usec): min=7, max=8026, avg=22.39, stdev=189.59 00:31:36.271 clat (msec): min=19, max=155, avg=71.52, stdev=20.49 00:31:36.271 lat (msec): min=19, max=155, avg=71.55, stdev=20.50 00:31:36.271 clat percentiles (msec): 00:31:36.271 | 1.00th=[ 37], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 53], 00:31:36.271 | 30.00th=[ 61], 40.00th=[ 65], 50.00th=[ 71], 60.00th=[ 72], 00:31:36.271 | 70.00th=[ 79], 80.00th=[ 85], 90.00th=[ 104], 95.00th=[ 113], 00:31:36.271 | 99.00th=[ 122], 99.50th=[ 130], 99.90th=[ 140], 99.95th=[ 146], 00:31:36.271 | 99.99th=[ 157] 00:31:36.271 bw ( KiB/s): min= 608, max= 1129, per=4.17%, avg=891.25, stdev=138.66, samples=20 00:31:36.271 iops : min= 152, max= 282, avg=222.80, stdev=34.64, samples=20 00:31:36.271 lat (msec) : 20=0.18%, 50=16.39%, 100=71.73%, 250=11.70% 00:31:36.271 cpu : usr=39.05%, sys=1.64%, ctx=1066, majf=0, minf=9 00:31:36.271 IO depths : 1=0.1%, 2=0.9%, 4=3.5%, 8=79.5%, 16=15.9%, 32=0.0%, >=64=0.0% 00:31:36.271 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:36.271 complete : 0=0.0%, 4=88.3%, 8=11.0%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:36.271 issued rwts: total=2239,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:36.271 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:36.271 filename0: (groupid=0, jobs=1): err= 0: pid=87026: Thu Jul 11 21:44:55 2024 00:31:36.271 read: IOPS=222, BW=890KiB/s (911kB/s)(8940KiB/10048msec) 00:31:36.271 slat (nsec): min=4803, max=56817, avg=17441.89, stdev=8716.49 00:31:36.271 clat (usec): min=1695, max=138944, avg=71778.70, stdev=22935.81 00:31:36.271 lat (usec): min=1706, max=138955, avg=71796.15, stdev=22935.31 00:31:36.271 clat percentiles (msec): 00:31:36.271 | 1.00th=[ 5], 
5.00th=[ 41], 10.00th=[ 48], 20.00th=[ 57], 00:31:36.271 | 30.00th=[ 61], 40.00th=[ 68], 50.00th=[ 72], 60.00th=[ 75], 00:31:36.271 | 70.00th=[ 82], 80.00th=[ 90], 90.00th=[ 106], 95.00th=[ 111], 00:31:36.271 | 99.00th=[ 123], 99.50th=[ 126], 99.90th=[ 132], 99.95th=[ 133], 00:31:36.271 | 99.99th=[ 140] 00:31:36.271 bw ( KiB/s): min= 584, max= 1536, per=4.16%, avg=887.40, stdev=190.77, samples=20 00:31:36.271 iops : min= 146, max= 384, avg=221.85, stdev=47.69, samples=20 00:31:36.271 lat (msec) : 2=0.09%, 4=0.63%, 10=2.06%, 20=0.72%, 50=11.81% 00:31:36.271 lat (msec) : 100=72.35%, 250=12.35% 00:31:36.271 cpu : usr=35.48%, sys=1.64%, ctx=1059, majf=0, minf=0 00:31:36.271 IO depths : 1=0.1%, 2=1.0%, 4=3.7%, 8=78.7%, 16=16.4%, 32=0.0%, >=64=0.0% 00:31:36.271 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:36.271 complete : 0=0.0%, 4=88.8%, 8=10.4%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:36.272 issued rwts: total=2235,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:36.272 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:36.272 filename0: (groupid=0, jobs=1): err= 0: pid=87027: Thu Jul 11 21:44:55 2024 00:31:36.272 read: IOPS=223, BW=892KiB/s (914kB/s)(8948KiB/10026msec) 00:31:36.272 slat (usec): min=5, max=12038, avg=34.22, stdev=397.31 00:31:36.272 clat (msec): min=23, max=168, avg=71.44, stdev=21.04 00:31:36.272 lat (msec): min=23, max=168, avg=71.47, stdev=21.03 00:31:36.272 clat percentiles (msec): 00:31:36.272 | 1.00th=[ 36], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 52], 00:31:36.272 | 30.00th=[ 61], 40.00th=[ 65], 50.00th=[ 72], 60.00th=[ 72], 00:31:36.272 | 70.00th=[ 75], 80.00th=[ 85], 90.00th=[ 106], 95.00th=[ 111], 00:31:36.272 | 99.00th=[ 131], 99.50th=[ 148], 99.90th=[ 157], 99.95th=[ 169], 00:31:36.272 | 99.99th=[ 169] 00:31:36.272 bw ( KiB/s): min= 632, max= 1096, per=4.17%, avg=890.85, stdev=125.14, samples=20 00:31:36.272 iops : min= 158, max= 274, avg=222.70, stdev=31.28, samples=20 00:31:36.272 lat (msec) : 50=18.37%, 100=69.96%, 250=11.67% 00:31:36.272 cpu : usr=36.71%, sys=1.20%, ctx=976, majf=0, minf=9 00:31:36.272 IO depths : 1=0.1%, 2=0.5%, 4=1.7%, 8=81.4%, 16=16.2%, 32=0.0%, >=64=0.0% 00:31:36.272 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:36.272 complete : 0=0.0%, 4=87.7%, 8=11.9%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:36.272 issued rwts: total=2237,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:36.272 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:36.272 filename0: (groupid=0, jobs=1): err= 0: pid=87028: Thu Jul 11 21:44:55 2024 00:31:36.272 read: IOPS=212, BW=851KiB/s (871kB/s)(8540KiB/10038msec) 00:31:36.272 slat (usec): min=7, max=8022, avg=21.56, stdev=180.52 00:31:36.272 clat (msec): min=15, max=164, avg=75.05, stdev=21.65 00:31:36.272 lat (msec): min=15, max=164, avg=75.08, stdev=21.65 00:31:36.272 clat percentiles (msec): 00:31:36.272 | 1.00th=[ 37], 5.00th=[ 47], 10.00th=[ 50], 20.00th=[ 59], 00:31:36.272 | 30.00th=[ 63], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 73], 00:31:36.272 | 70.00th=[ 84], 80.00th=[ 95], 90.00th=[ 108], 95.00th=[ 117], 00:31:36.272 | 99.00th=[ 126], 99.50th=[ 138], 99.90th=[ 140], 99.95th=[ 165], 00:31:36.272 | 99.99th=[ 165] 00:31:36.272 bw ( KiB/s): min= 512, max= 1129, per=3.97%, avg=847.25, stdev=159.50, samples=20 00:31:36.272 iops : min= 128, max= 282, avg=211.80, stdev=39.85, samples=20 00:31:36.272 lat (msec) : 20=0.75%, 50=10.49%, 100=73.16%, 250=15.60% 00:31:36.272 cpu : usr=41.40%, sys=1.49%, ctx=1099, majf=0, minf=10 00:31:36.272 IO 
depths : 1=0.1%, 2=2.4%, 4=9.7%, 8=72.6%, 16=15.3%, 32=0.0%, >=64=0.0% 00:31:36.272 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:36.272 complete : 0=0.0%, 4=90.3%, 8=7.6%, 16=2.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:36.272 issued rwts: total=2135,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:36.272 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:36.272 filename0: (groupid=0, jobs=1): err= 0: pid=87029: Thu Jul 11 21:44:55 2024 00:31:36.272 read: IOPS=227, BW=908KiB/s (930kB/s)(9128KiB/10049msec) 00:31:36.272 slat (usec): min=5, max=8025, avg=25.79, stdev=230.56 00:31:36.272 clat (msec): min=4, max=134, avg=70.25, stdev=22.18 00:31:36.272 lat (msec): min=4, max=134, avg=70.28, stdev=22.19 00:31:36.272 clat percentiles (msec): 00:31:36.272 | 1.00th=[ 6], 5.00th=[ 39], 10.00th=[ 47], 20.00th=[ 55], 00:31:36.272 | 30.00th=[ 61], 40.00th=[ 66], 50.00th=[ 71], 60.00th=[ 72], 00:31:36.272 | 70.00th=[ 78], 80.00th=[ 85], 90.00th=[ 103], 95.00th=[ 111], 00:31:36.272 | 99.00th=[ 122], 99.50th=[ 124], 99.90th=[ 130], 99.95th=[ 130], 00:31:36.272 | 99.99th=[ 134] 00:31:36.272 bw ( KiB/s): min= 664, max= 1440, per=4.24%, avg=906.45, stdev=167.11, samples=20 00:31:36.272 iops : min= 166, max= 360, avg=226.60, stdev=41.77, samples=20 00:31:36.272 lat (msec) : 10=2.10%, 20=0.61%, 50=14.02%, 100=72.61%, 250=10.65% 00:31:36.272 cpu : usr=36.16%, sys=1.98%, ctx=1202, majf=0, minf=9 00:31:36.272 IO depths : 1=0.1%, 2=0.5%, 4=1.9%, 8=81.0%, 16=16.5%, 32=0.0%, >=64=0.0% 00:31:36.272 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:36.272 complete : 0=0.0%, 4=88.1%, 8=11.5%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:36.272 issued rwts: total=2282,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:36.272 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:36.272 filename1: (groupid=0, jobs=1): err= 0: pid=87030: Thu Jul 11 21:44:55 2024 00:31:36.272 read: IOPS=226, BW=907KiB/s (929kB/s)(9092KiB/10024msec) 00:31:36.272 slat (usec): min=4, max=8036, avg=31.34, stdev=252.42 00:31:36.272 clat (msec): min=29, max=131, avg=70.36, stdev=19.45 00:31:36.272 lat (msec): min=29, max=131, avg=70.39, stdev=19.45 00:31:36.272 clat percentiles (msec): 00:31:36.272 | 1.00th=[ 36], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 52], 00:31:36.272 | 30.00th=[ 61], 40.00th=[ 65], 50.00th=[ 70], 60.00th=[ 72], 00:31:36.272 | 70.00th=[ 78], 80.00th=[ 84], 90.00th=[ 100], 95.00th=[ 110], 00:31:36.272 | 99.00th=[ 121], 99.50th=[ 122], 99.90th=[ 132], 99.95th=[ 132], 00:31:36.272 | 99.99th=[ 132] 00:31:36.272 bw ( KiB/s): min= 664, max= 1040, per=4.23%, avg=902.45, stdev=117.55, samples=20 00:31:36.272 iops : min= 166, max= 260, avg=225.60, stdev=29.38, samples=20 00:31:36.272 lat (msec) : 50=18.26%, 100=71.98%, 250=9.77% 00:31:36.272 cpu : usr=42.29%, sys=1.92%, ctx=1176, majf=0, minf=9 00:31:36.272 IO depths : 1=0.1%, 2=0.7%, 4=2.9%, 8=80.5%, 16=15.8%, 32=0.0%, >=64=0.0% 00:31:36.272 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:36.272 complete : 0=0.0%, 4=87.8%, 8=11.5%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:36.272 issued rwts: total=2273,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:36.272 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:36.272 filename1: (groupid=0, jobs=1): err= 0: pid=87031: Thu Jul 11 21:44:55 2024 00:31:36.272 read: IOPS=231, BW=926KiB/s (949kB/s)(9268KiB/10004msec) 00:31:36.272 slat (usec): min=4, max=8036, avg=30.68, stdev=267.66 00:31:36.272 clat (msec): min=4, max=144, avg=68.90, 
stdev=21.33 00:31:36.272 lat (msec): min=4, max=144, avg=68.93, stdev=21.33 00:31:36.272 clat percentiles (msec): 00:31:36.272 | 1.00th=[ 27], 5.00th=[ 41], 10.00th=[ 47], 20.00th=[ 50], 00:31:36.272 | 30.00th=[ 56], 40.00th=[ 61], 50.00th=[ 68], 60.00th=[ 72], 00:31:36.272 | 70.00th=[ 75], 80.00th=[ 84], 90.00th=[ 103], 95.00th=[ 110], 00:31:36.272 | 99.00th=[ 121], 99.50th=[ 123], 99.90th=[ 136], 99.95th=[ 136], 00:31:36.272 | 99.99th=[ 144] 00:31:36.272 bw ( KiB/s): min= 664, max= 1072, per=4.26%, avg=910.84, stdev=126.41, samples=19 00:31:36.272 iops : min= 166, max= 268, avg=227.68, stdev=31.61, samples=19 00:31:36.272 lat (msec) : 10=0.65%, 20=0.04%, 50=20.63%, 100=68.06%, 250=10.62% 00:31:36.272 cpu : usr=39.97%, sys=1.67%, ctx=1183, majf=0, minf=9 00:31:36.272 IO depths : 1=0.1%, 2=0.7%, 4=2.7%, 8=81.0%, 16=15.6%, 32=0.0%, >=64=0.0% 00:31:36.272 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:36.272 complete : 0=0.0%, 4=87.6%, 8=11.8%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:36.272 issued rwts: total=2317,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:36.272 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:36.272 filename1: (groupid=0, jobs=1): err= 0: pid=87032: Thu Jul 11 21:44:55 2024 00:31:36.272 read: IOPS=218, BW=875KiB/s (896kB/s)(8776KiB/10035msec) 00:31:36.272 slat (nsec): min=5993, max=62491, avg=17568.31, stdev=8750.66 00:31:36.272 clat (msec): min=33, max=155, avg=73.03, stdev=20.43 00:31:36.272 lat (msec): min=33, max=155, avg=73.04, stdev=20.43 00:31:36.272 clat percentiles (msec): 00:31:36.272 | 1.00th=[ 37], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 57], 00:31:36.272 | 30.00th=[ 61], 40.00th=[ 68], 50.00th=[ 72], 60.00th=[ 73], 00:31:36.272 | 70.00th=[ 82], 80.00th=[ 90], 90.00th=[ 107], 95.00th=[ 109], 00:31:36.272 | 99.00th=[ 121], 99.50th=[ 130], 99.90th=[ 144], 99.95th=[ 144], 00:31:36.272 | 99.99th=[ 157] 00:31:36.272 bw ( KiB/s): min= 584, max= 1010, per=4.08%, avg=871.30, stdev=126.02, samples=20 00:31:36.272 iops : min= 146, max= 252, avg=217.80, stdev=31.48, samples=20 00:31:36.272 lat (msec) : 50=16.41%, 100=70.51%, 250=13.08% 00:31:36.272 cpu : usr=36.81%, sys=1.47%, ctx=1046, majf=0, minf=9 00:31:36.272 IO depths : 1=0.1%, 2=0.8%, 4=3.1%, 8=79.8%, 16=16.4%, 32=0.0%, >=64=0.0% 00:31:36.272 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:36.272 complete : 0=0.0%, 4=88.4%, 8=11.0%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:36.272 issued rwts: total=2194,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:36.272 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:36.272 filename1: (groupid=0, jobs=1): err= 0: pid=87033: Thu Jul 11 21:44:55 2024 00:31:36.272 read: IOPS=223, BW=894KiB/s (916kB/s)(8956KiB/10015msec) 00:31:36.272 slat (usec): min=4, max=8044, avg=29.38, stdev=254.27 00:31:36.272 clat (msec): min=21, max=140, avg=71.44, stdev=20.30 00:31:36.272 lat (msec): min=21, max=140, avg=71.47, stdev=20.30 00:31:36.272 clat percentiles (msec): 00:31:36.272 | 1.00th=[ 39], 5.00th=[ 44], 10.00th=[ 48], 20.00th=[ 52], 00:31:36.272 | 30.00th=[ 60], 40.00th=[ 66], 50.00th=[ 71], 60.00th=[ 73], 00:31:36.272 | 70.00th=[ 80], 80.00th=[ 87], 90.00th=[ 103], 95.00th=[ 111], 00:31:36.272 | 99.00th=[ 122], 99.50th=[ 127], 99.90th=[ 132], 99.95th=[ 140], 00:31:36.272 | 99.99th=[ 142] 00:31:36.272 bw ( KiB/s): min= 616, max= 1024, per=4.17%, avg=889.25, stdev=121.33, samples=20 00:31:36.272 iops : min= 154, max= 256, avg=222.30, stdev=30.33, samples=20 00:31:36.272 lat (msec) : 50=18.58%, 
100=70.75%, 250=10.67% 00:31:36.272 cpu : usr=37.09%, sys=1.43%, ctx=1079, majf=0, minf=9 00:31:36.272 IO depths : 1=0.1%, 2=1.1%, 4=4.2%, 8=79.1%, 16=15.5%, 32=0.0%, >=64=0.0% 00:31:36.272 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:36.272 complete : 0=0.0%, 4=88.2%, 8=10.9%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:36.272 issued rwts: total=2239,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:36.272 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:36.272 filename1: (groupid=0, jobs=1): err= 0: pid=87034: Thu Jul 11 21:44:55 2024 00:31:36.272 read: IOPS=232, BW=929KiB/s (951kB/s)(9300KiB/10010msec) 00:31:36.272 slat (usec): min=4, max=8034, avg=30.42, stdev=308.92 00:31:36.272 clat (msec): min=10, max=156, avg=68.75, stdev=21.08 00:31:36.272 lat (msec): min=10, max=156, avg=68.78, stdev=21.07 00:31:36.272 clat percentiles (msec): 00:31:36.272 | 1.00th=[ 31], 5.00th=[ 40], 10.00th=[ 47], 20.00th=[ 48], 00:31:36.272 | 30.00th=[ 58], 40.00th=[ 62], 50.00th=[ 70], 60.00th=[ 72], 00:31:36.272 | 70.00th=[ 74], 80.00th=[ 85], 90.00th=[ 99], 95.00th=[ 109], 00:31:36.272 | 99.00th=[ 122], 99.50th=[ 125], 99.90th=[ 132], 99.95th=[ 157], 00:31:36.272 | 99.99th=[ 157] 00:31:36.272 bw ( KiB/s): min= 664, max= 1072, per=4.30%, avg=917.89, stdev=120.82, samples=19 00:31:36.272 iops : min= 166, max= 268, avg=229.47, stdev=30.21, samples=19 00:31:36.272 lat (msec) : 20=0.52%, 50=22.80%, 100=67.40%, 250=9.29% 00:31:36.272 cpu : usr=31.52%, sys=1.25%, ctx=1060, majf=0, minf=9 00:31:36.272 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=83.4%, 16=16.0%, 32=0.0%, >=64=0.0% 00:31:36.272 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:36.272 complete : 0=0.0%, 4=87.1%, 8=12.8%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:36.272 issued rwts: total=2325,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:36.272 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:36.272 filename1: (groupid=0, jobs=1): err= 0: pid=87035: Thu Jul 11 21:44:55 2024 00:31:36.273 read: IOPS=239, BW=957KiB/s (980kB/s)(9568KiB/10001msec) 00:31:36.273 slat (usec): min=4, max=8034, avg=32.19, stdev=312.45 00:31:36.273 clat (usec): min=898, max=136717, avg=66756.45, stdev=22630.54 00:31:36.273 lat (usec): min=907, max=136731, avg=66788.64, stdev=22624.93 00:31:36.273 clat percentiles (usec): 00:31:36.273 | 1.00th=[ 1958], 5.00th=[ 38011], 10.00th=[ 43779], 20.00th=[ 47973], 00:31:36.273 | 30.00th=[ 54264], 40.00th=[ 60031], 50.00th=[ 66847], 60.00th=[ 70779], 00:31:36.273 | 70.00th=[ 74974], 80.00th=[ 83362], 90.00th=[ 96994], 95.00th=[108528], 00:31:36.273 | 99.00th=[120062], 99.50th=[123208], 99.90th=[135267], 99.95th=[137364], 00:31:36.273 | 99.99th=[137364] 00:31:36.273 bw ( KiB/s): min= 664, max= 1072, per=4.32%, avg=923.79, stdev=122.24, samples=19 00:31:36.273 iops : min= 166, max= 268, avg=230.95, stdev=30.56, samples=19 00:31:36.273 lat (usec) : 1000=0.25% 00:31:36.273 lat (msec) : 2=0.96%, 4=0.75%, 10=0.46%, 20=0.38%, 50=21.74% 00:31:36.273 lat (msec) : 100=66.35%, 250=9.11% 00:31:36.273 cpu : usr=39.56%, sys=1.63%, ctx=1124, majf=0, minf=9 00:31:36.273 IO depths : 1=0.1%, 2=0.8%, 4=2.8%, 8=81.0%, 16=15.3%, 32=0.0%, >=64=0.0% 00:31:36.273 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:36.273 complete : 0=0.0%, 4=87.5%, 8=11.9%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:36.273 issued rwts: total=2392,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:36.273 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:36.273 filename1: 
(groupid=0, jobs=1): err= 0: pid=87036: Thu Jul 11 21:44:55 2024 00:31:36.273 read: IOPS=226, BW=907KiB/s (929kB/s)(9072KiB/10005msec) 00:31:36.273 slat (usec): min=4, max=8044, avg=37.13, stdev=359.45 00:31:36.273 clat (msec): min=6, max=133, avg=70.38, stdev=21.03 00:31:36.273 lat (msec): min=6, max=140, avg=70.42, stdev=21.04 00:31:36.273 clat percentiles (msec): 00:31:36.273 | 1.00th=[ 36], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 50], 00:31:36.273 | 30.00th=[ 59], 40.00th=[ 64], 50.00th=[ 70], 60.00th=[ 72], 00:31:36.273 | 70.00th=[ 77], 80.00th=[ 84], 90.00th=[ 105], 95.00th=[ 114], 00:31:36.273 | 99.00th=[ 124], 99.50th=[ 134], 99.90th=[ 134], 99.95th=[ 134], 00:31:36.273 | 99.99th=[ 134] 00:31:36.273 bw ( KiB/s): min= 664, max= 1024, per=4.19%, avg=895.68, stdev=114.07, samples=19 00:31:36.273 iops : min= 166, max= 256, avg=223.89, stdev=28.49, samples=19 00:31:36.273 lat (msec) : 10=0.26%, 20=0.31%, 50=19.71%, 100=68.39%, 250=11.33% 00:31:36.273 cpu : usr=38.89%, sys=1.81%, ctx=1177, majf=0, minf=9 00:31:36.273 IO depths : 1=0.1%, 2=1.3%, 4=5.2%, 8=78.3%, 16=15.2%, 32=0.0%, >=64=0.0% 00:31:36.273 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:36.273 complete : 0=0.0%, 4=88.3%, 8=10.5%, 16=1.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:36.273 issued rwts: total=2268,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:36.273 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:36.273 filename1: (groupid=0, jobs=1): err= 0: pid=87037: Thu Jul 11 21:44:55 2024 00:31:36.273 read: IOPS=222, BW=890KiB/s (911kB/s)(8912KiB/10016msec) 00:31:36.273 slat (usec): min=4, max=10028, avg=43.52, stdev=466.47 00:31:36.273 clat (msec): min=21, max=156, avg=71.77, stdev=20.62 00:31:36.273 lat (msec): min=21, max=156, avg=71.81, stdev=20.62 00:31:36.273 clat percentiles (msec): 00:31:36.273 | 1.00th=[ 37], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 51], 00:31:36.273 | 30.00th=[ 61], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 72], 00:31:36.273 | 70.00th=[ 80], 80.00th=[ 85], 90.00th=[ 106], 95.00th=[ 110], 00:31:36.273 | 99.00th=[ 121], 99.50th=[ 122], 99.90th=[ 132], 99.95th=[ 157], 00:31:36.273 | 99.99th=[ 157] 00:31:36.273 bw ( KiB/s): min= 656, max= 1040, per=4.14%, avg=884.80, stdev=128.25, samples=20 00:31:36.273 iops : min= 164, max= 260, avg=221.20, stdev=32.06, samples=20 00:31:36.273 lat (msec) : 50=19.88%, 100=69.03%, 250=11.09% 00:31:36.273 cpu : usr=31.57%, sys=1.18%, ctx=1027, majf=0, minf=9 00:31:36.273 IO depths : 1=0.1%, 2=1.1%, 4=4.2%, 8=79.2%, 16=15.4%, 32=0.0%, >=64=0.0% 00:31:36.273 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:36.273 complete : 0=0.0%, 4=88.1%, 8=11.0%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:36.273 issued rwts: total=2228,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:36.273 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:36.273 filename2: (groupid=0, jobs=1): err= 0: pid=87038: Thu Jul 11 21:44:55 2024 00:31:36.273 read: IOPS=226, BW=905KiB/s (927kB/s)(9064KiB/10014msec) 00:31:36.273 slat (usec): min=4, max=8029, avg=27.92, stdev=269.47 00:31:36.273 clat (msec): min=19, max=130, avg=70.56, stdev=19.88 00:31:36.273 lat (msec): min=19, max=130, avg=70.59, stdev=19.87 00:31:36.273 clat percentiles (msec): 00:31:36.273 | 1.00th=[ 36], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 51], 00:31:36.273 | 30.00th=[ 61], 40.00th=[ 64], 50.00th=[ 71], 60.00th=[ 72], 00:31:36.273 | 70.00th=[ 77], 80.00th=[ 85], 90.00th=[ 99], 95.00th=[ 109], 00:31:36.273 | 99.00th=[ 121], 99.50th=[ 121], 99.90th=[ 131], 99.95th=[ 131], 
00:31:36.273 | 99.99th=[ 131] 00:31:36.273 bw ( KiB/s): min= 664, max= 1072, per=4.22%, avg=900.00, stdev=118.47, samples=20 00:31:36.273 iops : min= 166, max= 268, avg=225.00, stdev=29.62, samples=20 00:31:36.273 lat (msec) : 20=0.26%, 50=18.76%, 100=71.36%, 250=9.62% 00:31:36.273 cpu : usr=33.50%, sys=1.58%, ctx=918, majf=0, minf=9 00:31:36.273 IO depths : 1=0.1%, 2=0.6%, 4=2.3%, 8=81.2%, 16=16.0%, 32=0.0%, >=64=0.0% 00:31:36.273 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:36.273 complete : 0=0.0%, 4=87.8%, 8=11.8%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:36.273 issued rwts: total=2266,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:36.273 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:36.273 filename2: (groupid=0, jobs=1): err= 0: pid=87039: Thu Jul 11 21:44:55 2024 00:31:36.273 read: IOPS=214, BW=857KiB/s (878kB/s)(8600KiB/10033msec) 00:31:36.273 slat (usec): min=7, max=8035, avg=27.21, stdev=259.27 00:31:36.273 clat (msec): min=35, max=155, avg=74.47, stdev=19.97 00:31:36.273 lat (msec): min=35, max=155, avg=74.49, stdev=19.97 00:31:36.273 clat percentiles (msec): 00:31:36.273 | 1.00th=[ 41], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 61], 00:31:36.273 | 30.00th=[ 62], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 73], 00:31:36.273 | 70.00th=[ 83], 80.00th=[ 93], 90.00th=[ 107], 95.00th=[ 115], 00:31:36.273 | 99.00th=[ 123], 99.50th=[ 132], 99.90th=[ 146], 99.95th=[ 146], 00:31:36.273 | 99.99th=[ 157] 00:31:36.273 bw ( KiB/s): min= 608, max= 1010, per=4.00%, avg=853.70, stdev=134.64, samples=20 00:31:36.273 iops : min= 152, max= 252, avg=213.40, stdev=33.63, samples=20 00:31:36.273 lat (msec) : 50=12.79%, 100=74.98%, 250=12.23% 00:31:36.273 cpu : usr=33.33%, sys=1.38%, ctx=887, majf=0, minf=9 00:31:36.273 IO depths : 1=0.1%, 2=0.9%, 4=3.7%, 8=78.9%, 16=16.5%, 32=0.0%, >=64=0.0% 00:31:36.273 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:36.273 complete : 0=0.0%, 4=88.8%, 8=10.4%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:36.273 issued rwts: total=2150,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:36.273 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:36.273 filename2: (groupid=0, jobs=1): err= 0: pid=87040: Thu Jul 11 21:44:55 2024 00:31:36.273 read: IOPS=222, BW=889KiB/s (910kB/s)(8936KiB/10050msec) 00:31:36.273 slat (nsec): min=3856, max=54282, avg=17015.85, stdev=8290.19 00:31:36.273 clat (msec): min=5, max=156, avg=71.83, stdev=22.94 00:31:36.273 lat (msec): min=5, max=156, avg=71.85, stdev=22.94 00:31:36.273 clat percentiles (msec): 00:31:36.273 | 1.00th=[ 9], 5.00th=[ 41], 10.00th=[ 48], 20.00th=[ 55], 00:31:36.273 | 30.00th=[ 63], 40.00th=[ 66], 50.00th=[ 72], 60.00th=[ 74], 00:31:36.273 | 70.00th=[ 80], 80.00th=[ 90], 90.00th=[ 105], 95.00th=[ 112], 00:31:36.273 | 99.00th=[ 126], 99.50th=[ 132], 99.90th=[ 157], 99.95th=[ 157], 00:31:36.273 | 99.99th=[ 157] 00:31:36.273 bw ( KiB/s): min= 584, max= 1296, per=4.15%, avg=886.80, stdev=160.07, samples=20 00:31:36.273 iops : min= 146, max= 324, avg=221.70, stdev=40.02, samples=20 00:31:36.273 lat (msec) : 10=1.43%, 20=1.34%, 50=13.88%, 100=69.34%, 250=14.01% 00:31:36.273 cpu : usr=40.97%, sys=1.70%, ctx=1360, majf=0, minf=9 00:31:36.273 IO depths : 1=0.1%, 2=0.6%, 4=2.1%, 8=80.4%, 16=16.7%, 32=0.0%, >=64=0.0% 00:31:36.273 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:36.273 complete : 0=0.0%, 4=88.3%, 8=11.2%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:36.273 issued rwts: total=2234,0,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:31:36.273 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:36.273 filename2: (groupid=0, jobs=1): err= 0: pid=87041: Thu Jul 11 21:44:55 2024 00:31:36.273 read: IOPS=227, BW=908KiB/s (930kB/s)(9096KiB/10015msec) 00:31:36.273 slat (nsec): min=5504, max=58481, avg=17328.27, stdev=8719.36 00:31:36.273 clat (msec): min=19, max=156, avg=70.37, stdev=21.28 00:31:36.273 lat (msec): min=19, max=156, avg=70.38, stdev=21.28 00:31:36.273 clat percentiles (msec): 00:31:36.273 | 1.00th=[ 36], 5.00th=[ 44], 10.00th=[ 48], 20.00th=[ 50], 00:31:36.273 | 30.00th=[ 58], 40.00th=[ 63], 50.00th=[ 70], 60.00th=[ 72], 00:31:36.273 | 70.00th=[ 77], 80.00th=[ 84], 90.00th=[ 105], 95.00th=[ 112], 00:31:36.273 | 99.00th=[ 129], 99.50th=[ 150], 99.90th=[ 150], 99.95th=[ 157], 00:31:36.273 | 99.99th=[ 157] 00:31:36.273 bw ( KiB/s): min= 664, max= 1048, per=4.23%, avg=903.20, stdev=118.09, samples=20 00:31:36.273 iops : min= 166, max= 262, avg=225.80, stdev=29.52, samples=20 00:31:36.273 lat (msec) : 20=0.22%, 50=20.49%, 100=67.72%, 250=11.57% 00:31:36.273 cpu : usr=34.67%, sys=1.47%, ctx=1271, majf=0, minf=9 00:31:36.273 IO depths : 1=0.1%, 2=0.8%, 4=3.4%, 8=80.1%, 16=15.6%, 32=0.0%, >=64=0.0% 00:31:36.273 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:36.273 complete : 0=0.0%, 4=87.9%, 8=11.4%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:36.273 issued rwts: total=2274,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:36.273 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:36.273 filename2: (groupid=0, jobs=1): err= 0: pid=87042: Thu Jul 11 21:44:55 2024 00:31:36.273 read: IOPS=223, BW=893KiB/s (915kB/s)(8956KiB/10025msec) 00:31:36.273 slat (usec): min=6, max=8035, avg=38.56, stdev=334.07 00:31:36.273 clat (msec): min=33, max=151, avg=71.37, stdev=19.97 00:31:36.273 lat (msec): min=33, max=151, avg=71.41, stdev=19.98 00:31:36.273 clat percentiles (msec): 00:31:36.273 | 1.00th=[ 37], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 53], 00:31:36.273 | 30.00th=[ 61], 40.00th=[ 65], 50.00th=[ 71], 60.00th=[ 72], 00:31:36.273 | 70.00th=[ 78], 80.00th=[ 85], 90.00th=[ 104], 95.00th=[ 112], 00:31:36.273 | 99.00th=[ 121], 99.50th=[ 124], 99.90th=[ 132], 99.95th=[ 140], 00:31:36.273 | 99.99th=[ 153] 00:31:36.273 bw ( KiB/s): min= 632, max= 1024, per=4.17%, avg=891.65, stdev=123.15, samples=20 00:31:36.273 iops : min= 158, max= 256, avg=222.90, stdev=30.78, samples=20 00:31:36.273 lat (msec) : 50=16.35%, 100=72.40%, 250=11.26% 00:31:36.273 cpu : usr=39.20%, sys=1.77%, ctx=1186, majf=0, minf=9 00:31:36.273 IO depths : 1=0.1%, 2=0.8%, 4=3.3%, 8=79.9%, 16=15.9%, 32=0.0%, >=64=0.0% 00:31:36.273 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:36.273 complete : 0=0.0%, 4=88.1%, 8=11.1%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:36.273 issued rwts: total=2239,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:36.273 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:36.273 filename2: (groupid=0, jobs=1): err= 0: pid=87043: Thu Jul 11 21:44:55 2024 00:31:36.273 read: IOPS=218, BW=874KiB/s (895kB/s)(8764KiB/10026msec) 00:31:36.273 slat (usec): min=7, max=8046, avg=31.77, stdev=331.61 00:31:36.274 clat (msec): min=28, max=135, avg=73.03, stdev=20.22 00:31:36.274 lat (msec): min=29, max=135, avg=73.06, stdev=20.22 00:31:36.274 clat percentiles (msec): 00:31:36.274 | 1.00th=[ 39], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 56], 00:31:36.274 | 30.00th=[ 62], 40.00th=[ 67], 50.00th=[ 72], 60.00th=[ 73], 00:31:36.274 | 70.00th=[ 81], 80.00th=[ 87], 90.00th=[ 
106], 95.00th=[ 114], 00:31:36.274 | 99.00th=[ 125], 99.50th=[ 128], 99.90th=[ 132], 99.95th=[ 132], 00:31:36.274 | 99.99th=[ 136] 00:31:36.274 bw ( KiB/s): min= 616, max= 1024, per=4.09%, avg=872.00, stdev=118.00, samples=20 00:31:36.274 iops : min= 154, max= 256, avg=218.00, stdev=29.50, samples=20 00:31:36.274 lat (msec) : 50=16.52%, 100=71.34%, 250=12.14% 00:31:36.274 cpu : usr=36.83%, sys=1.43%, ctx=1191, majf=0, minf=9 00:31:36.274 IO depths : 1=0.1%, 2=1.5%, 4=5.8%, 8=77.2%, 16=15.5%, 32=0.0%, >=64=0.0% 00:31:36.274 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:36.274 complete : 0=0.0%, 4=88.8%, 8=10.0%, 16=1.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:36.274 issued rwts: total=2191,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:36.274 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:36.274 filename2: (groupid=0, jobs=1): err= 0: pid=87044: Thu Jul 11 21:44:55 2024 00:31:36.274 read: IOPS=216, BW=867KiB/s (887kB/s)(8696KiB/10035msec) 00:31:36.274 slat (usec): min=7, max=8024, avg=27.25, stdev=229.94 00:31:36.274 clat (msec): min=33, max=151, avg=73.65, stdev=20.81 00:31:36.274 lat (msec): min=33, max=151, avg=73.68, stdev=20.80 00:31:36.274 clat percentiles (msec): 00:31:36.274 | 1.00th=[ 38], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 56], 00:31:36.274 | 30.00th=[ 63], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 74], 00:31:36.274 | 70.00th=[ 80], 80.00th=[ 93], 90.00th=[ 105], 95.00th=[ 114], 00:31:36.274 | 99.00th=[ 132], 99.50th=[ 140], 99.90th=[ 144], 99.95th=[ 153], 00:31:36.274 | 99.99th=[ 153] 00:31:36.274 bw ( KiB/s): min= 616, max= 1138, per=4.04%, avg=863.30, stdev=156.17, samples=20 00:31:36.274 iops : min= 154, max= 284, avg=215.80, stdev=39.00, samples=20 00:31:36.274 lat (msec) : 50=16.10%, 100=71.07%, 250=12.83% 00:31:36.274 cpu : usr=40.30%, sys=1.81%, ctx=1291, majf=0, minf=9 00:31:36.274 IO depths : 1=0.1%, 2=1.8%, 4=7.1%, 8=75.6%, 16=15.4%, 32=0.0%, >=64=0.0% 00:31:36.274 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:36.274 complete : 0=0.0%, 4=89.3%, 8=9.1%, 16=1.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:36.274 issued rwts: total=2174,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:36.274 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:36.274 filename2: (groupid=0, jobs=1): err= 0: pid=87045: Thu Jul 11 21:44:55 2024 00:31:36.274 read: IOPS=220, BW=883KiB/s (904kB/s)(8848KiB/10020msec) 00:31:36.274 slat (usec): min=4, max=12029, avg=39.26, stdev=434.02 00:31:36.274 clat (msec): min=31, max=135, avg=72.20, stdev=20.36 00:31:36.274 lat (msec): min=31, max=135, avg=72.24, stdev=20.38 00:31:36.274 clat percentiles (msec): 00:31:36.274 | 1.00th=[ 36], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 51], 00:31:36.274 | 30.00th=[ 61], 40.00th=[ 66], 50.00th=[ 72], 60.00th=[ 73], 00:31:36.274 | 70.00th=[ 82], 80.00th=[ 86], 90.00th=[ 108], 95.00th=[ 111], 00:31:36.274 | 99.00th=[ 121], 99.50th=[ 121], 99.90th=[ 132], 99.95th=[ 132], 00:31:36.274 | 99.99th=[ 136] 00:31:36.274 bw ( KiB/s): min= 664, max= 1024, per=4.13%, avg=881.00, stdev=115.91, samples=20 00:31:36.274 iops : min= 166, max= 256, avg=220.20, stdev=29.03, samples=20 00:31:36.274 lat (msec) : 50=18.35%, 100=70.39%, 250=11.26% 00:31:36.274 cpu : usr=31.36%, sys=1.38%, ctx=1068, majf=0, minf=9 00:31:36.274 IO depths : 1=0.1%, 2=0.9%, 4=4.0%, 8=79.4%, 16=15.6%, 32=0.0%, >=64=0.0% 00:31:36.274 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:36.274 complete : 0=0.0%, 4=88.2%, 8=11.0%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:31:36.274 issued rwts: total=2212,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:36.274 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:36.274 00:31:36.274 Run status group 0 (all jobs): 00:31:36.274 READ: bw=20.8MiB/s (21.9MB/s), 841KiB/s-957KiB/s (861kB/s-980kB/s), io=209MiB (220MB), run=10001-10050msec 00:31:36.274 21:44:55 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:31:36.274 21:44:55 -- target/dif.sh@43 -- # local sub 00:31:36.274 21:44:55 -- target/dif.sh@45 -- # for sub in "$@" 00:31:36.274 21:44:55 -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:36.274 21:44:55 -- target/dif.sh@36 -- # local sub_id=0 00:31:36.274 21:44:55 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:36.274 21:44:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:36.274 21:44:55 -- common/autotest_common.sh@10 -- # set +x 00:31:36.274 21:44:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:36.274 21:44:55 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:36.274 21:44:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:36.274 21:44:55 -- common/autotest_common.sh@10 -- # set +x 00:31:36.274 21:44:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:36.274 21:44:55 -- target/dif.sh@45 -- # for sub in "$@" 00:31:36.274 21:44:55 -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:36.274 21:44:55 -- target/dif.sh@36 -- # local sub_id=1 00:31:36.274 21:44:55 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:36.274 21:44:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:36.274 21:44:55 -- common/autotest_common.sh@10 -- # set +x 00:31:36.274 21:44:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:36.274 21:44:55 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:36.274 21:44:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:36.274 21:44:55 -- common/autotest_common.sh@10 -- # set +x 00:31:36.274 21:44:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:36.274 21:44:55 -- target/dif.sh@45 -- # for sub in "$@" 00:31:36.274 21:44:55 -- target/dif.sh@46 -- # destroy_subsystem 2 00:31:36.274 21:44:55 -- target/dif.sh@36 -- # local sub_id=2 00:31:36.274 21:44:55 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:31:36.274 21:44:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:36.274 21:44:55 -- common/autotest_common.sh@10 -- # set +x 00:31:36.274 21:44:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:36.274 21:44:55 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:31:36.274 21:44:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:36.274 21:44:55 -- common/autotest_common.sh@10 -- # set +x 00:31:36.274 21:44:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:36.274 21:44:55 -- target/dif.sh@115 -- # NULL_DIF=1 00:31:36.274 21:44:55 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:31:36.274 21:44:55 -- target/dif.sh@115 -- # numjobs=2 00:31:36.274 21:44:55 -- target/dif.sh@115 -- # iodepth=8 00:31:36.274 21:44:55 -- target/dif.sh@115 -- # runtime=5 00:31:36.274 21:44:55 -- target/dif.sh@115 -- # files=1 00:31:36.274 21:44:55 -- target/dif.sh@117 -- # create_subsystems 0 1 00:31:36.274 21:44:55 -- target/dif.sh@28 -- # local sub 00:31:36.274 21:44:55 -- target/dif.sh@30 -- # for sub in "$@" 00:31:36.274 21:44:55 -- target/dif.sh@31 -- # create_subsystem 0 00:31:36.274 21:44:55 -- target/dif.sh@18 -- # local sub_id=0 
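With the 24-thread randread pass summarized above (20.8MiB/s aggregate over the 10001-10050msec runs), dif.sh tears down subsystems 0-2 and switches to the NULL_DIF=1 case with bs=8k,16k,128k, numjobs=2, iodepth=8, runtime=5 and files=1; the entries that follow rebuild subsystems 0 and 1 on DIF type 1 null bdevs. For reference, a rough standalone equivalent of the traced rpc_cmd calls for subsystem 0, assuming rpc_cmd forwards to scripts/rpc.py against the running target (the rpc.py path is inferred from the workspace layout, not shown in the log):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
    $rpc bdev_null_delete bdev_null0
    # rebuild for the next case: 64 MiB null bdev, 512-byte blocks, 16-byte metadata, DIF type 1
    $rpc bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

Subsystem 1 is set up the same way with the index bumped; the trace for both continues below.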
00:31:36.274 21:44:55 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:31:36.274 21:44:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:36.274 21:44:55 -- common/autotest_common.sh@10 -- # set +x 00:31:36.274 bdev_null0 00:31:36.274 21:44:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:36.274 21:44:55 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:36.274 21:44:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:36.274 21:44:55 -- common/autotest_common.sh@10 -- # set +x 00:31:36.274 21:44:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:36.274 21:44:55 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:36.274 21:44:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:36.274 21:44:55 -- common/autotest_common.sh@10 -- # set +x 00:31:36.274 21:44:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:36.274 21:44:55 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:36.274 21:44:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:36.274 21:44:55 -- common/autotest_common.sh@10 -- # set +x 00:31:36.274 [2024-07-11 21:44:55.426579] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:36.274 21:44:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:36.274 21:44:55 -- target/dif.sh@30 -- # for sub in "$@" 00:31:36.274 21:44:55 -- target/dif.sh@31 -- # create_subsystem 1 00:31:36.274 21:44:55 -- target/dif.sh@18 -- # local sub_id=1 00:31:36.274 21:44:55 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:31:36.274 21:44:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:36.274 21:44:55 -- common/autotest_common.sh@10 -- # set +x 00:31:36.274 bdev_null1 00:31:36.274 21:44:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:36.274 21:44:55 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:36.274 21:44:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:36.274 21:44:55 -- common/autotest_common.sh@10 -- # set +x 00:31:36.274 21:44:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:36.274 21:44:55 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:36.274 21:44:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:36.274 21:44:55 -- common/autotest_common.sh@10 -- # set +x 00:31:36.274 21:44:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:36.274 21:44:55 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:36.274 21:44:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:36.274 21:44:55 -- common/autotest_common.sh@10 -- # set +x 00:31:36.274 21:44:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:36.274 21:44:55 -- target/dif.sh@118 -- # fio /dev/fd/62 00:31:36.274 21:44:55 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:31:36.274 21:44:55 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:31:36.274 21:44:55 -- nvmf/common.sh@520 -- # config=() 00:31:36.274 21:44:55 -- nvmf/common.sh@520 -- # local subsystem config 00:31:36.274 21:44:55 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:31:36.274 
21:44:55 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:31:36.274 { 00:31:36.274 "params": { 00:31:36.274 "name": "Nvme$subsystem", 00:31:36.274 "trtype": "$TEST_TRANSPORT", 00:31:36.274 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:36.274 "adrfam": "ipv4", 00:31:36.274 "trsvcid": "$NVMF_PORT", 00:31:36.274 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:36.274 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:36.274 "hdgst": ${hdgst:-false}, 00:31:36.274 "ddgst": ${ddgst:-false} 00:31:36.274 }, 00:31:36.274 "method": "bdev_nvme_attach_controller" 00:31:36.274 } 00:31:36.274 EOF 00:31:36.274 )") 00:31:36.274 21:44:55 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:36.274 21:44:55 -- target/dif.sh@82 -- # gen_fio_conf 00:31:36.274 21:44:55 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:36.274 21:44:55 -- target/dif.sh@54 -- # local file 00:31:36.274 21:44:55 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:31:36.275 21:44:55 -- target/dif.sh@56 -- # cat 00:31:36.275 21:44:55 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:36.275 21:44:55 -- common/autotest_common.sh@1318 -- # local sanitizers 00:31:36.275 21:44:55 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:31:36.275 21:44:55 -- common/autotest_common.sh@1320 -- # shift 00:31:36.275 21:44:55 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:31:36.275 21:44:55 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:31:36.275 21:44:55 -- nvmf/common.sh@542 -- # cat 00:31:36.275 21:44:55 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:31:36.275 21:44:55 -- common/autotest_common.sh@1324 -- # grep libasan 00:31:36.275 21:44:55 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:31:36.275 21:44:55 -- target/dif.sh@72 -- # (( file = 1 )) 00:31:36.275 21:44:55 -- target/dif.sh@72 -- # (( file <= files )) 00:31:36.275 21:44:55 -- target/dif.sh@73 -- # cat 00:31:36.275 21:44:55 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:31:36.275 21:44:55 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:31:36.275 { 00:31:36.275 "params": { 00:31:36.275 "name": "Nvme$subsystem", 00:31:36.275 "trtype": "$TEST_TRANSPORT", 00:31:36.275 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:36.275 "adrfam": "ipv4", 00:31:36.275 "trsvcid": "$NVMF_PORT", 00:31:36.275 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:36.275 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:36.275 "hdgst": ${hdgst:-false}, 00:31:36.275 "ddgst": ${ddgst:-false} 00:31:36.275 }, 00:31:36.275 "method": "bdev_nvme_attach_controller" 00:31:36.275 } 00:31:36.275 EOF 00:31:36.275 )") 00:31:36.275 21:44:55 -- nvmf/common.sh@542 -- # cat 00:31:36.275 21:44:55 -- target/dif.sh@72 -- # (( file++ )) 00:31:36.275 21:44:55 -- target/dif.sh@72 -- # (( file <= files )) 00:31:36.275 21:44:55 -- nvmf/common.sh@544 -- # jq . 
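The heredocs traced above are how gen_nvmf_target_json builds the config: one bdev_nvme_attach_controller block per requested subsystem, with $TEST_TRANSPORT, $NVMF_FIRST_TARGET_IP and $NVMF_PORT resolving to tcp, 10.0.0.2 and 4420 in this run. The blocks are then joined with commas (the IFS=,/printf step that follows) and passed through jq on their way to fio, which reads the result from /dev/fd/62; only the joined attach_controller blocks are echoed in the trace. A condensed re-creation of the per-subsystem template, copied from the heredoc above (the function name and the printf formulation are illustrative, not the helper's own code, which uses a bash heredoc):

    gen_block() {
        # expands one attach_controller block for subsystem index $1
        local s=$1
        printf '{"params":{"name":"Nvme%s","trtype":"%s","traddr":"%s","adrfam":"ipv4","trsvcid":"%s","subnqn":"nqn.2016-06.io.spdk:cnode%s","hostnqn":"nqn.2016-06.io.spdk:host%s","hdgst":%s,"ddgst":%s},"method":"bdev_nvme_attach_controller"}' \
            "$s" "${TEST_TRANSPORT:-tcp}" "${NVMF_FIRST_TARGET_IP:-10.0.0.2}" "${NVMF_PORT:-4420}" \
            "$s" "$s" "${hdgst:-false}" "${ddgst:-false}"
    }
    # join with commas, as the IFS=,/printf step just below does
    config="$(gen_block 0),$(gen_block 1)"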
00:31:36.275 21:44:55 -- nvmf/common.sh@545 -- # IFS=, 00:31:36.275 21:44:55 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:31:36.275 "params": { 00:31:36.275 "name": "Nvme0", 00:31:36.275 "trtype": "tcp", 00:31:36.275 "traddr": "10.0.0.2", 00:31:36.275 "adrfam": "ipv4", 00:31:36.275 "trsvcid": "4420", 00:31:36.275 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:36.275 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:36.275 "hdgst": false, 00:31:36.275 "ddgst": false 00:31:36.275 }, 00:31:36.275 "method": "bdev_nvme_attach_controller" 00:31:36.275 },{ 00:31:36.275 "params": { 00:31:36.275 "name": "Nvme1", 00:31:36.275 "trtype": "tcp", 00:31:36.275 "traddr": "10.0.0.2", 00:31:36.275 "adrfam": "ipv4", 00:31:36.275 "trsvcid": "4420", 00:31:36.275 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:36.275 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:36.275 "hdgst": false, 00:31:36.275 "ddgst": false 00:31:36.275 }, 00:31:36.275 "method": "bdev_nvme_attach_controller" 00:31:36.275 }' 00:31:36.275 21:44:55 -- common/autotest_common.sh@1324 -- # asan_lib= 00:31:36.275 21:44:55 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:31:36.275 21:44:55 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:31:36.275 21:44:55 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:31:36.275 21:44:55 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:31:36.275 21:44:55 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:31:36.275 21:44:55 -- common/autotest_common.sh@1324 -- # asan_lib= 00:31:36.275 21:44:55 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:31:36.275 21:44:55 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:31:36.275 21:44:55 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:36.275 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:31:36.275 ... 00:31:36.275 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:31:36.275 ... 00:31:36.275 fio-3.35 00:31:36.275 Starting 4 threads 00:31:36.275 [2024-07-11 21:44:56.111574] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:31:36.275 [2024-07-11 21:44:56.111653] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:31:40.453 00:31:40.453 filename0: (groupid=0, jobs=1): err= 0: pid=87192: Thu Jul 11 21:45:01 2024 00:31:40.453 read: IOPS=2274, BW=17.8MiB/s (18.6MB/s)(88.9MiB/5003msec) 00:31:40.453 slat (nsec): min=6710, max=47717, avg=11285.13, stdev=3966.91 00:31:40.453 clat (usec): min=1147, max=6799, avg=3487.86, stdev=959.72 00:31:40.453 lat (usec): min=1155, max=6813, avg=3499.15, stdev=959.60 00:31:40.453 clat percentiles (usec): 00:31:40.453 | 1.00th=[ 1549], 5.00th=[ 1958], 10.00th=[ 2008], 20.00th=[ 2540], 00:31:40.453 | 30.00th=[ 2900], 40.00th=[ 2966], 50.00th=[ 3687], 60.00th=[ 3785], 00:31:40.453 | 70.00th=[ 4228], 80.00th=[ 4621], 90.00th=[ 4686], 95.00th=[ 4752], 00:31:40.453 | 99.00th=[ 4817], 99.50th=[ 4883], 99.90th=[ 5080], 99.95th=[ 5407], 00:31:40.453 | 99.99th=[ 5932] 00:31:40.453 bw ( KiB/s): min=16734, max=18624, per=26.70%, avg=18195.33, stdev=570.32, samples=9 00:31:40.453 iops : min= 2091, max= 2328, avg=2274.33, stdev=71.53, samples=9 00:31:40.453 lat (msec) : 2=9.90%, 4=56.23%, 10=33.88% 00:31:40.453 cpu : usr=92.38%, sys=6.72%, ctx=5, majf=0, minf=0 00:31:40.453 IO depths : 1=0.1%, 2=3.7%, 4=61.9%, 8=34.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:40.453 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:40.453 complete : 0=0.0%, 4=98.6%, 8=1.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:40.453 issued rwts: total=11377,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:40.453 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:40.453 filename0: (groupid=0, jobs=1): err= 0: pid=87193: Thu Jul 11 21:45:01 2024 00:31:40.453 read: IOPS=2235, BW=17.5MiB/s (18.3MB/s)(87.4MiB/5002msec) 00:31:40.453 slat (nsec): min=7819, max=49867, avg=15025.38, stdev=3543.38 00:31:40.453 clat (usec): min=1202, max=7004, avg=3539.05, stdev=951.50 00:31:40.453 lat (usec): min=1214, max=7018, avg=3554.08, stdev=951.44 00:31:40.453 clat percentiles (usec): 00:31:40.453 | 1.00th=[ 1532], 5.00th=[ 1991], 10.00th=[ 2024], 20.00th=[ 2606], 00:31:40.453 | 30.00th=[ 2900], 40.00th=[ 3359], 50.00th=[ 3687], 60.00th=[ 3785], 00:31:40.453 | 70.00th=[ 4359], 80.00th=[ 4555], 90.00th=[ 4621], 95.00th=[ 4686], 00:31:40.453 | 99.00th=[ 4817], 99.50th=[ 4948], 99.90th=[ 5538], 99.95th=[ 5735], 00:31:40.453 | 99.99th=[ 6652] 00:31:40.453 bw ( KiB/s): min=15744, max=18624, per=26.20%, avg=17852.56, stdev=984.10, samples=9 00:31:40.453 iops : min= 1968, max= 2328, avg=2231.56, stdev=123.03, samples=9 00:31:40.453 lat (msec) : 2=7.76%, 4=55.81%, 10=36.43% 00:31:40.453 cpu : usr=92.80%, sys=6.36%, ctx=4, majf=0, minf=9 00:31:40.453 IO depths : 1=0.1%, 2=4.9%, 4=61.2%, 8=33.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:40.453 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:40.453 complete : 0=0.0%, 4=98.2%, 8=1.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:40.453 issued rwts: total=11181,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:40.453 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:40.453 filename1: (groupid=0, jobs=1): err= 0: pid=87194: Thu Jul 11 21:45:01 2024 00:31:40.453 read: IOPS=1852, BW=14.5MiB/s (15.2MB/s)(72.4MiB/5003msec) 00:31:40.453 slat (nsec): min=3845, max=79264, avg=13042.44, stdev=4985.92 00:31:40.453 clat (usec): min=1148, max=6846, avg=4270.95, stdev=875.47 00:31:40.453 lat (usec): min=1157, max=6861, avg=4283.99, stdev=874.11 00:31:40.453 clat percentiles (usec): 00:31:40.453 | 1.00th=[ 1991], 5.00th=[ 
2409], 10.00th=[ 2900], 20.00th=[ 3720], 00:31:40.453 | 30.00th=[ 3818], 40.00th=[ 4293], 50.00th=[ 4686], 60.00th=[ 4686], 00:31:40.453 | 70.00th=[ 4752], 80.00th=[ 4752], 90.00th=[ 4883], 95.00th=[ 5800], 00:31:40.453 | 99.00th=[ 5932], 99.50th=[ 5997], 99.90th=[ 6128], 99.95th=[ 6259], 00:31:40.453 | 99.99th=[ 6849] 00:31:40.453 bw ( KiB/s): min=13072, max=18560, per=22.05%, avg=15022.22, stdev=2287.80, samples=9 00:31:40.453 iops : min= 1634, max= 2320, avg=1877.78, stdev=285.97, samples=9 00:31:40.453 lat (msec) : 2=1.48%, 4=33.96%, 10=64.56% 00:31:40.453 cpu : usr=92.54%, sys=6.66%, ctx=7, majf=0, minf=9 00:31:40.453 IO depths : 1=0.1%, 2=19.0%, 4=53.3%, 8=27.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:40.453 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:40.453 complete : 0=0.0%, 4=92.5%, 8=7.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:40.453 issued rwts: total=9266,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:40.453 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:40.453 filename1: (groupid=0, jobs=1): err= 0: pid=87195: Thu Jul 11 21:45:01 2024 00:31:40.453 read: IOPS=2157, BW=16.9MiB/s (17.7MB/s)(84.3MiB/5001msec) 00:31:40.453 slat (nsec): min=7565, max=90994, avg=14768.63, stdev=3951.76 00:31:40.453 clat (usec): min=1181, max=6360, avg=3666.82, stdev=965.90 00:31:40.453 lat (usec): min=1190, max=6378, avg=3681.59, stdev=965.00 00:31:40.453 clat percentiles (usec): 00:31:40.453 | 1.00th=[ 1516], 5.00th=[ 1991], 10.00th=[ 2040], 20.00th=[ 2835], 00:31:40.453 | 30.00th=[ 2900], 40.00th=[ 3654], 50.00th=[ 3752], 60.00th=[ 4178], 00:31:40.453 | 70.00th=[ 4555], 80.00th=[ 4621], 90.00th=[ 4752], 95.00th=[ 4752], 00:31:40.453 | 99.00th=[ 4948], 99.50th=[ 5473], 99.90th=[ 5997], 99.95th=[ 5997], 00:31:40.453 | 99.99th=[ 6063] 00:31:40.453 bw ( KiB/s): min=13312, max=18624, per=25.17%, avg=17153.89, stdev=1790.99, samples=9 00:31:40.453 iops : min= 1664, max= 2328, avg=2144.22, stdev=223.88, samples=9 00:31:40.453 lat (msec) : 2=7.11%, 4=51.46%, 10=41.43% 00:31:40.453 cpu : usr=92.72%, sys=6.38%, ctx=52, majf=0, minf=9 00:31:40.453 IO depths : 1=0.1%, 2=7.5%, 4=59.8%, 8=32.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:40.453 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:40.453 complete : 0=0.0%, 4=97.2%, 8=2.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:40.453 issued rwts: total=10788,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:40.453 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:40.453 00:31:40.453 Run status group 0 (all jobs): 00:31:40.453 READ: bw=66.5MiB/s (69.8MB/s), 14.5MiB/s-17.8MiB/s (15.2MB/s-18.6MB/s), io=333MiB (349MB), run=5001-5003msec 00:31:40.710 21:45:01 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:31:40.710 21:45:01 -- target/dif.sh@43 -- # local sub 00:31:40.710 21:45:01 -- target/dif.sh@45 -- # for sub in "$@" 00:31:40.710 21:45:01 -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:40.710 21:45:01 -- target/dif.sh@36 -- # local sub_id=0 00:31:40.710 21:45:01 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:40.710 21:45:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:40.710 21:45:01 -- common/autotest_common.sh@10 -- # set +x 00:31:40.710 21:45:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:40.710 21:45:01 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:40.710 21:45:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:40.710 21:45:01 -- common/autotest_common.sh@10 -- # set +x 00:31:40.710 
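For reference, the create_subsystem/destroy_subsystem pairs that dif.sh drives through rpc_cmd in this test correspond to plain SPDK RPC calls; a minimal sketch, assuming the stock scripts/rpc.py client and that a TCP transport (nvmf_create_transport -t tcp) has already been created:

  # 64 MiB null bdev, 512-byte blocks, 16-byte metadata, DIF type 1 (as in the setup above)
  scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
  # expose it over NVMe/TCP on 10.0.0.2:4420
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # teardown mirrors destroy_subsystem: remove the subsystem first, then the bdev
  scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
  scripts/rpc.py bdev_null_delete bdev_null0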
21:45:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:40.710 21:45:01 -- target/dif.sh@45 -- # for sub in "$@" 00:31:40.710 21:45:01 -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:40.710 21:45:01 -- target/dif.sh@36 -- # local sub_id=1 00:31:40.710 21:45:01 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:40.710 21:45:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:40.710 21:45:01 -- common/autotest_common.sh@10 -- # set +x 00:31:40.710 21:45:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:40.710 21:45:01 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:40.710 21:45:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:40.710 21:45:01 -- common/autotest_common.sh@10 -- # set +x 00:31:40.710 21:45:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:40.710 00:31:40.710 real 0m23.466s 00:31:40.710 user 2m4.486s 00:31:40.710 sys 0m7.245s 00:31:40.710 21:45:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:40.710 21:45:01 -- common/autotest_common.sh@10 -- # set +x 00:31:40.710 ************************************ 00:31:40.710 END TEST fio_dif_rand_params 00:31:40.710 ************************************ 00:31:40.710 21:45:01 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:31:40.710 21:45:01 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:31:40.710 21:45:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:40.710 21:45:01 -- common/autotest_common.sh@10 -- # set +x 00:31:40.710 ************************************ 00:31:40.710 START TEST fio_dif_digest 00:31:40.710 ************************************ 00:31:40.710 21:45:01 -- common/autotest_common.sh@1104 -- # fio_dif_digest 00:31:40.710 21:45:01 -- target/dif.sh@123 -- # local NULL_DIF 00:31:40.710 21:45:01 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:31:40.710 21:45:01 -- target/dif.sh@125 -- # local hdgst ddgst 00:31:40.710 21:45:01 -- target/dif.sh@127 -- # NULL_DIF=3 00:31:40.710 21:45:01 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:31:40.710 21:45:01 -- target/dif.sh@127 -- # numjobs=3 00:31:40.710 21:45:01 -- target/dif.sh@127 -- # iodepth=3 00:31:40.710 21:45:01 -- target/dif.sh@127 -- # runtime=10 00:31:40.710 21:45:01 -- target/dif.sh@128 -- # hdgst=true 00:31:40.710 21:45:01 -- target/dif.sh@128 -- # ddgst=true 00:31:40.710 21:45:01 -- target/dif.sh@130 -- # create_subsystems 0 00:31:40.710 21:45:01 -- target/dif.sh@28 -- # local sub 00:31:40.710 21:45:01 -- target/dif.sh@30 -- # for sub in "$@" 00:31:40.710 21:45:01 -- target/dif.sh@31 -- # create_subsystem 0 00:31:40.710 21:45:01 -- target/dif.sh@18 -- # local sub_id=0 00:31:40.710 21:45:01 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:31:40.710 21:45:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:40.710 21:45:01 -- common/autotest_common.sh@10 -- # set +x 00:31:40.710 bdev_null0 00:31:40.710 21:45:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:40.710 21:45:01 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:40.710 21:45:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:40.710 21:45:01 -- common/autotest_common.sh@10 -- # set +x 00:31:40.710 21:45:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:40.710 21:45:01 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 
00:31:40.710 21:45:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:40.710 21:45:01 -- common/autotest_common.sh@10 -- # set +x 00:31:40.710 21:45:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:40.710 21:45:01 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:40.710 21:45:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:40.710 21:45:01 -- common/autotest_common.sh@10 -- # set +x 00:31:40.710 [2024-07-11 21:45:01.567279] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:40.710 21:45:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:40.710 21:45:01 -- target/dif.sh@131 -- # fio /dev/fd/62 00:31:40.710 21:45:01 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:31:40.710 21:45:01 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:31:40.711 21:45:01 -- nvmf/common.sh@520 -- # config=() 00:31:40.711 21:45:01 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:40.711 21:45:01 -- target/dif.sh@82 -- # gen_fio_conf 00:31:40.711 21:45:01 -- nvmf/common.sh@520 -- # local subsystem config 00:31:40.711 21:45:01 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:40.711 21:45:01 -- target/dif.sh@54 -- # local file 00:31:40.711 21:45:01 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:31:40.711 21:45:01 -- target/dif.sh@56 -- # cat 00:31:40.711 21:45:01 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:31:40.711 { 00:31:40.711 "params": { 00:31:40.711 "name": "Nvme$subsystem", 00:31:40.711 "trtype": "$TEST_TRANSPORT", 00:31:40.711 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:40.711 "adrfam": "ipv4", 00:31:40.711 "trsvcid": "$NVMF_PORT", 00:31:40.711 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:40.711 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:40.711 "hdgst": ${hdgst:-false}, 00:31:40.711 "ddgst": ${ddgst:-false} 00:31:40.711 }, 00:31:40.711 "method": "bdev_nvme_attach_controller" 00:31:40.711 } 00:31:40.711 EOF 00:31:40.711 )") 00:31:40.711 21:45:01 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:31:40.711 21:45:01 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:40.711 21:45:01 -- common/autotest_common.sh@1318 -- # local sanitizers 00:31:40.711 21:45:01 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:31:40.711 21:45:01 -- common/autotest_common.sh@1320 -- # shift 00:31:40.711 21:45:01 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:31:40.711 21:45:01 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:31:40.711 21:45:01 -- nvmf/common.sh@542 -- # cat 00:31:40.711 21:45:01 -- target/dif.sh@72 -- # (( file = 1 )) 00:31:40.711 21:45:01 -- target/dif.sh@72 -- # (( file <= files )) 00:31:40.711 21:45:01 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:31:40.711 21:45:01 -- common/autotest_common.sh@1324 -- # grep libasan 00:31:40.711 21:45:01 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:31:40.711 21:45:01 -- nvmf/common.sh@544 -- # jq . 
00:31:40.711 21:45:01 -- nvmf/common.sh@545 -- # IFS=, 00:31:40.711 21:45:01 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:31:40.711 "params": { 00:31:40.711 "name": "Nvme0", 00:31:40.711 "trtype": "tcp", 00:31:40.711 "traddr": "10.0.0.2", 00:31:40.711 "adrfam": "ipv4", 00:31:40.711 "trsvcid": "4420", 00:31:40.711 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:40.711 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:40.711 "hdgst": true, 00:31:40.711 "ddgst": true 00:31:40.711 }, 00:31:40.711 "method": "bdev_nvme_attach_controller" 00:31:40.711 }' 00:31:40.711 21:45:01 -- common/autotest_common.sh@1324 -- # asan_lib= 00:31:40.711 21:45:01 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:31:40.711 21:45:01 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:31:40.711 21:45:01 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:31:40.711 21:45:01 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:31:40.711 21:45:01 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:31:40.711 21:45:01 -- common/autotest_common.sh@1324 -- # asan_lib= 00:31:40.711 21:45:01 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:31:40.711 21:45:01 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:31:40.711 21:45:01 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:40.968 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:31:40.968 ... 00:31:40.968 fio-3.35 00:31:40.968 Starting 3 threads 00:31:41.225 [2024-07-11 21:45:02.171657] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
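The bdev_nvme_attach_controller parameters assembled just above are what the fio spdk_bdev plugin consumes via --spdk_json_conf; a standalone equivalent might look like the sketch below (the subsystems/config wrapper, the Nvme0n1 filename and the exact job options are illustrative assumptions, not copied from this run):

  # nvme.json - attach the TCP namespace as a bdev with header and data digests enabled
  cat > nvme.json <<'JSON'
  { "subsystems": [
      { "subsystem": "bdev",
        "config": [
          { "method": "bdev_nvme_attach_controller",
            "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
                        "adrfam": "ipv4", "trsvcid": "4420",
                        "subnqn": "nqn.2016-06.io.spdk:cnode0",
                        "hostnqn": "nqn.2016-06.io.spdk:host0",
                        "hdgst": true, "ddgst": true } } ] } ] }
  JSON
  # run fio through the SPDK bdev engine, same shape as the /dev/fd/62 invocation above
  LD_PRELOAD=./build/fio/spdk_bdev /usr/src/fio/fio --ioengine=spdk_bdev \
      --spdk_json_conf=nvme.json --name=digest --filename=Nvme0n1 \
      --rw=randread --bs=128k --iodepth=3 --numjobs=3 --runtime=10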
00:31:41.225 [2024-07-11 21:45:02.171742] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:31:53.446 00:31:53.446 filename0: (groupid=0, jobs=1): err= 0: pid=87301: Thu Jul 11 21:45:12 2024 00:31:53.446 read: IOPS=230, BW=28.8MiB/s (30.2MB/s)(288MiB/10011msec) 00:31:53.446 slat (nsec): min=7736, max=81430, avg=18405.81, stdev=9255.66 00:31:53.446 clat (usec): min=12664, max=17010, avg=12978.00, stdev=211.37 00:31:53.446 lat (usec): min=12673, max=17044, avg=12996.41, stdev=213.79 00:31:53.446 clat percentiles (usec): 00:31:53.446 | 1.00th=[12780], 5.00th=[12780], 10.00th=[12780], 20.00th=[12911], 00:31:53.446 | 30.00th=[12911], 40.00th=[12911], 50.00th=[12911], 60.00th=[13042], 00:31:53.446 | 70.00th=[13042], 80.00th=[13042], 90.00th=[13173], 95.00th=[13173], 00:31:53.446 | 99.00th=[13435], 99.50th=[13698], 99.90th=[16909], 99.95th=[16909], 00:31:53.446 | 99.99th=[16909] 00:31:53.446 bw ( KiB/s): min=29184, max=29952, per=33.33%, avg=29491.20, stdev=386.02, samples=20 00:31:53.446 iops : min= 228, max= 234, avg=230.40, stdev= 3.02, samples=20 00:31:53.446 lat (msec) : 20=100.00% 00:31:53.446 cpu : usr=92.91%, sys=6.35%, ctx=103, majf=0, minf=0 00:31:53.446 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:53.446 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:53.446 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:53.446 issued rwts: total=2307,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:53.446 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:53.446 filename0: (groupid=0, jobs=1): err= 0: pid=87302: Thu Jul 11 21:45:12 2024 00:31:53.446 read: IOPS=230, BW=28.8MiB/s (30.2MB/s)(288MiB/10010msec) 00:31:53.446 slat (nsec): min=8155, max=81929, avg=19791.41, stdev=8511.45 00:31:53.446 clat (usec): min=12718, max=15415, avg=12970.78, stdev=172.42 00:31:53.446 lat (usec): min=12732, max=15439, avg=12990.57, stdev=175.80 00:31:53.446 clat percentiles (usec): 00:31:53.446 | 1.00th=[12780], 5.00th=[12780], 10.00th=[12780], 20.00th=[12911], 00:31:53.446 | 30.00th=[12911], 40.00th=[12911], 50.00th=[12911], 60.00th=[13042], 00:31:53.446 | 70.00th=[13042], 80.00th=[13042], 90.00th=[13173], 95.00th=[13173], 00:31:53.446 | 99.00th=[13304], 99.50th=[13698], 99.90th=[15401], 99.95th=[15401], 00:31:53.446 | 99.99th=[15401] 00:31:53.446 bw ( KiB/s): min=29184, max=29952, per=33.33%, avg=29494.10, stdev=383.80, samples=20 00:31:53.446 iops : min= 228, max= 234, avg=230.40, stdev= 3.02, samples=20 00:31:53.446 lat (msec) : 20=100.00% 00:31:53.446 cpu : usr=92.92%, sys=6.52%, ctx=11, majf=0, minf=0 00:31:53.446 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:53.446 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:53.446 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:53.446 issued rwts: total=2307,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:53.446 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:53.446 filename0: (groupid=0, jobs=1): err= 0: pid=87303: Thu Jul 11 21:45:12 2024 00:31:53.446 read: IOPS=230, BW=28.8MiB/s (30.2MB/s)(288MiB/10008msec) 00:31:53.446 slat (nsec): min=7892, max=80311, avg=19579.33, stdev=8776.49 00:31:53.446 clat (usec): min=11782, max=14534, avg=12970.49, stdev=161.74 00:31:53.446 lat (usec): min=11793, max=14561, avg=12990.07, stdev=165.09 00:31:53.446 clat percentiles (usec): 00:31:53.446 | 1.00th=[12780], 5.00th=[12780], 
10.00th=[12780], 20.00th=[12911], 00:31:53.446 | 30.00th=[12911], 40.00th=[12911], 50.00th=[12911], 60.00th=[13042], 00:31:53.446 | 70.00th=[13042], 80.00th=[13042], 90.00th=[13173], 95.00th=[13173], 00:31:53.446 | 99.00th=[13304], 99.50th=[13960], 99.90th=[14484], 99.95th=[14484], 00:31:53.446 | 99.99th=[14484] 00:31:53.446 bw ( KiB/s): min=29184, max=29952, per=33.33%, avg=29491.20, stdev=386.02, samples=20 00:31:53.446 iops : min= 228, max= 234, avg=230.40, stdev= 3.02, samples=20 00:31:53.446 lat (msec) : 20=100.00% 00:31:53.446 cpu : usr=93.07%, sys=6.41%, ctx=24, majf=0, minf=9 00:31:53.446 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:53.446 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:53.446 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:53.446 issued rwts: total=2307,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:53.446 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:53.446 00:31:53.446 Run status group 0 (all jobs): 00:31:53.446 READ: bw=86.4MiB/s (90.6MB/s), 28.8MiB/s-28.8MiB/s (30.2MB/s-30.2MB/s), io=865MiB (907MB), run=10008-10011msec 00:31:53.446 21:45:12 -- target/dif.sh@132 -- # destroy_subsystems 0 00:31:53.446 21:45:12 -- target/dif.sh@43 -- # local sub 00:31:53.446 21:45:12 -- target/dif.sh@45 -- # for sub in "$@" 00:31:53.446 21:45:12 -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:53.446 21:45:12 -- target/dif.sh@36 -- # local sub_id=0 00:31:53.446 21:45:12 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:53.446 21:45:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:53.446 21:45:12 -- common/autotest_common.sh@10 -- # set +x 00:31:53.446 21:45:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:53.446 21:45:12 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:53.446 21:45:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:53.446 21:45:12 -- common/autotest_common.sh@10 -- # set +x 00:31:53.446 21:45:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:53.446 00:31:53.446 real 0m11.022s 00:31:53.446 user 0m28.543s 00:31:53.446 sys 0m2.211s 00:31:53.446 21:45:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:53.446 21:45:12 -- common/autotest_common.sh@10 -- # set +x 00:31:53.446 ************************************ 00:31:53.446 END TEST fio_dif_digest 00:31:53.446 ************************************ 00:31:53.446 21:45:12 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:31:53.446 21:45:12 -- target/dif.sh@147 -- # nvmftestfini 00:31:53.446 21:45:12 -- nvmf/common.sh@476 -- # nvmfcleanup 00:31:53.446 21:45:12 -- nvmf/common.sh@116 -- # sync 00:31:53.446 21:45:12 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:31:53.446 21:45:12 -- nvmf/common.sh@119 -- # set +e 00:31:53.446 21:45:12 -- nvmf/common.sh@120 -- # for i in {1..20} 00:31:53.446 21:45:12 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:31:53.446 rmmod nvme_tcp 00:31:53.446 rmmod nvme_fabrics 00:31:53.446 rmmod nvme_keyring 00:31:53.446 21:45:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:31:53.446 21:45:12 -- nvmf/common.sh@123 -- # set -e 00:31:53.446 21:45:12 -- nvmf/common.sh@124 -- # return 0 00:31:53.446 21:45:12 -- nvmf/common.sh@477 -- # '[' -n 86543 ']' 00:31:53.446 21:45:12 -- nvmf/common.sh@478 -- # killprocess 86543 00:31:53.446 21:45:12 -- common/autotest_common.sh@926 -- # '[' -z 86543 ']' 00:31:53.446 21:45:12 -- common/autotest_common.sh@930 -- # kill 
-0 86543 00:31:53.446 21:45:12 -- common/autotest_common.sh@931 -- # uname 00:31:53.446 21:45:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:53.446 21:45:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 86543 00:31:53.446 21:45:12 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:31:53.446 killing process with pid 86543 00:31:53.446 21:45:12 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:31:53.446 21:45:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 86543' 00:31:53.446 21:45:12 -- common/autotest_common.sh@945 -- # kill 86543 00:31:53.446 21:45:12 -- common/autotest_common.sh@950 -- # wait 86543 00:31:53.446 21:45:12 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:31:53.446 21:45:12 -- nvmf/common.sh@481 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:31:53.446 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:53.446 Waiting for block devices as requested 00:31:53.446 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:31:53.446 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:31:53.446 21:45:13 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:31:53.446 21:45:13 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:31:53.446 21:45:13 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:53.446 21:45:13 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:31:53.446 21:45:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:53.446 21:45:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:53.446 21:45:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:53.446 21:45:13 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:31:53.446 00:31:53.446 real 0m59.587s 00:31:53.446 user 3m48.903s 00:31:53.446 sys 0m17.991s 00:31:53.446 21:45:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:53.446 21:45:13 -- common/autotest_common.sh@10 -- # set +x 00:31:53.446 ************************************ 00:31:53.446 END TEST nvmf_dif 00:31:53.446 ************************************ 00:31:53.446 21:45:13 -- spdk/autotest.sh@301 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:31:53.446 21:45:13 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:31:53.446 21:45:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:53.446 21:45:13 -- common/autotest_common.sh@10 -- # set +x 00:31:53.446 ************************************ 00:31:53.446 START TEST nvmf_abort_qd_sizes 00:31:53.446 ************************************ 00:31:53.446 21:45:13 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:31:53.446 * Looking for test storage... 
00:31:53.446 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:31:53.446 21:45:13 -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:31:53.446 21:45:13 -- nvmf/common.sh@7 -- # uname -s 00:31:53.446 21:45:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:53.447 21:45:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:53.447 21:45:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:53.447 21:45:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:53.447 21:45:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:53.447 21:45:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:53.447 21:45:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:53.447 21:45:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:53.447 21:45:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:53.447 21:45:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:53.447 21:45:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:65f0dc09-2f81-4c7b-a413-2a2a000e2750 00:31:53.447 21:45:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=65f0dc09-2f81-4c7b-a413-2a2a000e2750 00:31:53.447 21:45:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:53.447 21:45:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:53.447 21:45:13 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:31:53.447 21:45:13 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:31:53.447 21:45:13 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:53.447 21:45:13 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:53.447 21:45:13 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:53.447 21:45:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:53.447 21:45:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:53.447 21:45:13 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:53.447 21:45:13 -- paths/export.sh@5 -- # export PATH 00:31:53.447 21:45:13 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:53.447 21:45:13 -- nvmf/common.sh@46 -- # : 0 00:31:53.447 21:45:13 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:31:53.447 21:45:13 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:31:53.447 21:45:13 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:31:53.447 21:45:13 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:53.447 21:45:13 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:53.447 21:45:13 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:31:53.447 21:45:13 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:31:53.447 21:45:13 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:31:53.447 21:45:13 -- target/abort_qd_sizes.sh@73 -- # nvmftestinit 00:31:53.447 21:45:13 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:31:53.447 21:45:13 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:53.447 21:45:13 -- nvmf/common.sh@436 -- # prepare_net_devs 00:31:53.447 21:45:13 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:31:53.447 21:45:13 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:31:53.447 21:45:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:53.447 21:45:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:53.447 21:45:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:53.447 21:45:13 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:31:53.447 21:45:13 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:31:53.447 21:45:13 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:31:53.447 21:45:13 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:31:53.447 21:45:13 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:31:53.447 21:45:13 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:31:53.447 21:45:13 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:53.447 21:45:13 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:53.447 21:45:13 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:31:53.447 21:45:13 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:31:53.447 21:45:13 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:31:53.447 21:45:13 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:31:53.447 21:45:13 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:31:53.447 21:45:13 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:53.447 21:45:13 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:31:53.447 21:45:13 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:31:53.447 21:45:13 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:31:53.447 21:45:13 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:31:53.447 21:45:13 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:31:53.447 21:45:13 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:31:53.447 Cannot find device "nvmf_tgt_br" 00:31:53.447 21:45:13 -- nvmf/common.sh@154 -- # true 00:31:53.447 21:45:13 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:31:53.447 Cannot find device "nvmf_tgt_br2" 00:31:53.447 21:45:13 -- nvmf/common.sh@155 -- # true 
00:31:53.447 21:45:13 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:31:53.447 21:45:13 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:31:53.447 Cannot find device "nvmf_tgt_br" 00:31:53.447 21:45:13 -- nvmf/common.sh@157 -- # true 00:31:53.447 21:45:13 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:31:53.447 Cannot find device "nvmf_tgt_br2" 00:31:53.447 21:45:13 -- nvmf/common.sh@158 -- # true 00:31:53.447 21:45:13 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:31:53.447 21:45:13 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:31:53.447 21:45:13 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:31:53.447 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:31:53.447 21:45:13 -- nvmf/common.sh@161 -- # true 00:31:53.447 21:45:13 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:31:53.447 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:31:53.447 21:45:13 -- nvmf/common.sh@162 -- # true 00:31:53.447 21:45:13 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:31:53.447 21:45:13 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:31:53.447 21:45:13 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:31:53.447 21:45:13 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:31:53.447 21:45:13 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:31:53.447 21:45:13 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:31:53.447 21:45:13 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:31:53.447 21:45:13 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:31:53.447 21:45:13 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:31:53.447 21:45:13 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:31:53.447 21:45:13 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:31:53.447 21:45:13 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:31:53.447 21:45:13 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:31:53.447 21:45:13 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:31:53.447 21:45:13 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:31:53.447 21:45:13 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:31:53.447 21:45:13 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:31:53.447 21:45:13 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:31:53.447 21:45:13 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:31:53.447 21:45:13 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:31:53.447 21:45:13 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:31:53.447 21:45:13 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:31:53.447 21:45:13 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:31:53.447 21:45:13 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:31:53.447 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:53.447 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.121 ms 00:31:53.447 00:31:53.447 --- 10.0.0.2 ping statistics --- 00:31:53.447 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:53.447 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:31:53.447 21:45:13 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:31:53.447 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:31:53.447 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:31:53.447 00:31:53.447 --- 10.0.0.3 ping statistics --- 00:31:53.447 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:53.447 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:31:53.447 21:45:13 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:31:53.447 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:53.447 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:31:53.447 00:31:53.447 --- 10.0.0.1 ping statistics --- 00:31:53.447 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:53.447 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:31:53.447 21:45:13 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:53.447 21:45:13 -- nvmf/common.sh@421 -- # return 0 00:31:53.447 21:45:13 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:31:53.447 21:45:13 -- nvmf/common.sh@439 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:31:53.705 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:53.705 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:31:53.963 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:31:53.963 21:45:14 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:53.963 21:45:14 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:31:53.963 21:45:14 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:31:53.963 21:45:14 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:53.963 21:45:14 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:31:53.963 21:45:14 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:31:53.963 21:45:14 -- target/abort_qd_sizes.sh@74 -- # nvmfappstart -m 0xf 00:31:53.963 21:45:14 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:31:53.963 21:45:14 -- common/autotest_common.sh@712 -- # xtrace_disable 00:31:53.963 21:45:14 -- common/autotest_common.sh@10 -- # set +x 00:31:53.963 21:45:14 -- nvmf/common.sh@469 -- # nvmfpid=87891 00:31:53.963 21:45:14 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:31:53.963 21:45:14 -- nvmf/common.sh@470 -- # waitforlisten 87891 00:31:53.963 21:45:14 -- common/autotest_common.sh@819 -- # '[' -z 87891 ']' 00:31:53.963 21:45:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:53.963 21:45:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:53.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:53.963 21:45:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:53.963 21:45:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:53.963 21:45:14 -- common/autotest_common.sh@10 -- # set +x 00:31:53.963 [2024-07-11 21:45:14.848282] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
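The veth/bridge plumbing that nvmf_veth_init just verified with the pings above can be reproduced by hand; a condensed sketch of the same topology (host side 10.0.0.1, target side 10.0.0.2 inside the nvmf_tgt_ns_spdk namespace; the second target interface and the iptables ACCEPT rules are omitted):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator pair stays on the host
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target pair, one end moves into the netns
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  for l in nvmf_init_if nvmf_init_br nvmf_tgt_br; do ip link set "$l" up; done
  ip link set nvmf_init_br master nvmf_br && ip link set nvmf_tgt_br master nvmf_br
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  # with connectivity confirmed (ping 10.0.0.2), the target app runs inside the namespace:
  ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf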
00:31:53.963 [2024-07-11 21:45:14.848424] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:54.278 [2024-07-11 21:45:14.996171] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:54.278 [2024-07-11 21:45:15.115804] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:54.278 [2024-07-11 21:45:15.116032] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:54.278 [2024-07-11 21:45:15.116067] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:54.279 [2024-07-11 21:45:15.116088] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:54.279 [2024-07-11 21:45:15.117518] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:54.279 [2024-07-11 21:45:15.117625] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:31:54.279 [2024-07-11 21:45:15.117711] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:31:54.279 [2024-07-11 21:45:15.117721] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:55.211 21:45:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:55.211 21:45:15 -- common/autotest_common.sh@852 -- # return 0 00:31:55.211 21:45:15 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:31:55.211 21:45:15 -- common/autotest_common.sh@718 -- # xtrace_disable 00:31:55.211 21:45:15 -- common/autotest_common.sh@10 -- # set +x 00:31:55.211 21:45:15 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:55.211 21:45:15 -- target/abort_qd_sizes.sh@76 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:31:55.211 21:45:15 -- target/abort_qd_sizes.sh@78 -- # mapfile -t nvmes 00:31:55.211 21:45:15 -- target/abort_qd_sizes.sh@78 -- # nvme_in_userspace 00:31:55.211 21:45:15 -- scripts/common.sh@311 -- # local bdf bdfs 00:31:55.211 21:45:15 -- scripts/common.sh@312 -- # local nvmes 00:31:55.211 21:45:15 -- scripts/common.sh@314 -- # [[ -n '' ]] 00:31:55.211 21:45:15 -- scripts/common.sh@317 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:31:55.211 21:45:15 -- scripts/common.sh@317 -- # iter_pci_class_code 01 08 02 00:31:55.211 21:45:15 -- scripts/common.sh@297 -- # local bdf= 00:31:55.211 21:45:15 -- scripts/common.sh@299 -- # iter_all_pci_class_code 01 08 02 00:31:55.211 21:45:15 -- scripts/common.sh@232 -- # local class 00:31:55.211 21:45:15 -- scripts/common.sh@233 -- # local subclass 00:31:55.211 21:45:15 -- scripts/common.sh@234 -- # local progif 00:31:55.211 21:45:15 -- scripts/common.sh@235 -- # printf %02x 1 00:31:55.211 21:45:15 -- scripts/common.sh@235 -- # class=01 00:31:55.211 21:45:15 -- scripts/common.sh@236 -- # printf %02x 8 00:31:55.211 21:45:15 -- scripts/common.sh@236 -- # subclass=08 00:31:55.211 21:45:15 -- scripts/common.sh@237 -- # printf %02x 2 00:31:55.211 21:45:15 -- scripts/common.sh@237 -- # progif=02 00:31:55.211 21:45:15 -- scripts/common.sh@239 -- # hash lspci 00:31:55.211 21:45:15 -- scripts/common.sh@240 -- # '[' 02 '!=' 00 ']' 00:31:55.211 21:45:15 -- scripts/common.sh@241 -- # lspci -mm -n -D 00:31:55.211 21:45:15 -- scripts/common.sh@242 -- # grep -i -- -p02 00:31:55.211 21:45:15 -- 
scripts/common.sh@244 -- # tr -d '"' 00:31:55.211 21:45:15 -- scripts/common.sh@243 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:31:55.211 21:45:15 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:31:55.211 21:45:15 -- scripts/common.sh@300 -- # pci_can_use 0000:00:06.0 00:31:55.211 21:45:15 -- scripts/common.sh@15 -- # local i 00:31:55.211 21:45:15 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:31:55.211 21:45:15 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:31:55.211 21:45:15 -- scripts/common.sh@24 -- # return 0 00:31:55.211 21:45:15 -- scripts/common.sh@301 -- # echo 0000:00:06.0 00:31:55.211 21:45:15 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:31:55.211 21:45:15 -- scripts/common.sh@300 -- # pci_can_use 0000:00:07.0 00:31:55.211 21:45:15 -- scripts/common.sh@15 -- # local i 00:31:55.211 21:45:15 -- scripts/common.sh@18 -- # [[ =~ 0000:00:07.0 ]] 00:31:55.211 21:45:15 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:31:55.211 21:45:15 -- scripts/common.sh@24 -- # return 0 00:31:55.211 21:45:15 -- scripts/common.sh@301 -- # echo 0000:00:07.0 00:31:55.211 21:45:15 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:31:55.211 21:45:15 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:06.0 ]] 00:31:55.211 21:45:15 -- scripts/common.sh@322 -- # uname -s 00:31:55.211 21:45:15 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:31:55.211 21:45:15 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:31:55.211 21:45:15 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:31:55.211 21:45:15 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:07.0 ]] 00:31:55.211 21:45:15 -- scripts/common.sh@322 -- # uname -s 00:31:55.211 21:45:15 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:31:55.211 21:45:15 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:31:55.211 21:45:15 -- scripts/common.sh@327 -- # (( 2 )) 00:31:55.211 21:45:15 -- scripts/common.sh@328 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:31:55.211 21:45:15 -- target/abort_qd_sizes.sh@79 -- # (( 2 > 0 )) 00:31:55.211 21:45:15 -- target/abort_qd_sizes.sh@81 -- # nvme=0000:00:06.0 00:31:55.211 21:45:15 -- target/abort_qd_sizes.sh@83 -- # run_test spdk_target_abort spdk_target 00:31:55.211 21:45:15 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:31:55.211 21:45:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:55.211 21:45:15 -- common/autotest_common.sh@10 -- # set +x 00:31:55.211 ************************************ 00:31:55.211 START TEST spdk_target_abort 00:31:55.211 ************************************ 00:31:55.211 21:45:15 -- common/autotest_common.sh@1104 -- # spdk_target 00:31:55.211 21:45:15 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:31:55.211 21:45:15 -- target/abort_qd_sizes.sh@44 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:31:55.211 21:45:15 -- target/abort_qd_sizes.sh@46 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:06.0 -b spdk_target 00:31:55.211 21:45:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:55.211 21:45:15 -- common/autotest_common.sh@10 -- # set +x 00:31:55.211 spdk_targetn1 00:31:55.211 21:45:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:55.211 21:45:16 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:55.211 21:45:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:55.211 21:45:16 -- common/autotest_common.sh@10 -- # set +x 00:31:55.211 [2024-07-11 
21:45:16.070272] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:55.211 21:45:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:55.211 21:45:16 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:spdk_target -a -s SPDKISFASTANDAWESOME 00:31:55.211 21:45:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:55.211 21:45:16 -- common/autotest_common.sh@10 -- # set +x 00:31:55.211 21:45:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:55.211 21:45:16 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:spdk_target spdk_targetn1 00:31:55.211 21:45:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:55.211 21:45:16 -- common/autotest_common.sh@10 -- # set +x 00:31:55.211 21:45:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:55.211 21:45:16 -- target/abort_qd_sizes.sh@51 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:spdk_target -t tcp -a 10.0.0.2 -s 4420 00:31:55.211 21:45:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:55.211 21:45:16 -- common/autotest_common.sh@10 -- # set +x 00:31:55.211 [2024-07-11 21:45:16.102461] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:55.211 21:45:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:55.211 21:45:16 -- target/abort_qd_sizes.sh@53 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:spdk_target 00:31:55.211 21:45:16 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:31:55.211 21:45:16 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:31:55.211 21:45:16 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:31:55.211 21:45:16 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:31:55.211 21:45:16 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:31:55.211 21:45:16 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:31:55.211 21:45:16 -- target/abort_qd_sizes.sh@24 -- # local target r 00:31:55.211 21:45:16 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:31:55.211 21:45:16 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:55.211 21:45:16 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:31:55.211 21:45:16 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:55.211 21:45:16 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:31:55.211 21:45:16 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:55.211 21:45:16 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:31:55.211 21:45:16 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:55.211 21:45:16 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:55.211 21:45:16 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:55.211 21:45:16 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:31:55.211 21:45:16 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:55.211 21:45:16 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:31:58.489 Initializing NVMe Controllers 00:31:58.489 Attached to 
NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:31:58.489 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:31:58.489 Initialization complete. Launching workers. 00:31:58.489 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 11880, failed: 0 00:31:58.489 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1016, failed to submit 10864 00:31:58.489 success 741, unsuccess 275, failed 0 00:31:58.489 21:45:19 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:58.489 21:45:19 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:32:01.765 Initializing NVMe Controllers 00:32:01.765 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:32:01.765 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:32:01.765 Initialization complete. Launching workers. 00:32:01.765 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 8944, failed: 0 00:32:01.765 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1144, failed to submit 7800 00:32:01.765 success 423, unsuccess 721, failed 0 00:32:01.765 21:45:22 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:01.765 21:45:22 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:32:05.044 Initializing NVMe Controllers 00:32:05.045 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:32:05.045 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:32:05.045 Initialization complete. Launching workers. 
00:32:05.045 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 32102, failed: 0 00:32:05.045 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 2190, failed to submit 29912 00:32:05.045 success 489, unsuccess 1701, failed 0 00:32:05.045 21:45:25 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:spdk_target 00:32:05.045 21:45:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:05.045 21:45:25 -- common/autotest_common.sh@10 -- # set +x 00:32:05.045 21:45:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:05.045 21:45:25 -- target/abort_qd_sizes.sh@56 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:32:05.045 21:45:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:05.045 21:45:25 -- common/autotest_common.sh@10 -- # set +x 00:32:05.303 21:45:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:05.303 21:45:26 -- target/abort_qd_sizes.sh@62 -- # killprocess 87891 00:32:05.303 21:45:26 -- common/autotest_common.sh@926 -- # '[' -z 87891 ']' 00:32:05.303 21:45:26 -- common/autotest_common.sh@930 -- # kill -0 87891 00:32:05.303 21:45:26 -- common/autotest_common.sh@931 -- # uname 00:32:05.303 21:45:26 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:32:05.303 21:45:26 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 87891 00:32:05.303 killing process with pid 87891 00:32:05.303 21:45:26 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:32:05.303 21:45:26 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:32:05.303 21:45:26 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 87891' 00:32:05.303 21:45:26 -- common/autotest_common.sh@945 -- # kill 87891 00:32:05.303 21:45:26 -- common/autotest_common.sh@950 -- # wait 87891 00:32:05.561 ************************************ 00:32:05.561 END TEST spdk_target_abort 00:32:05.561 ************************************ 00:32:05.561 00:32:05.561 real 0m10.421s 00:32:05.561 user 0m42.399s 00:32:05.561 sys 0m2.288s 00:32:05.561 21:45:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:05.561 21:45:26 -- common/autotest_common.sh@10 -- # set +x 00:32:05.561 21:45:26 -- target/abort_qd_sizes.sh@84 -- # run_test kernel_target_abort kernel_target 00:32:05.561 21:45:26 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:32:05.561 21:45:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:05.561 21:45:26 -- common/autotest_common.sh@10 -- # set +x 00:32:05.561 ************************************ 00:32:05.561 START TEST kernel_target_abort 00:32:05.561 ************************************ 00:32:05.561 21:45:26 -- common/autotest_common.sh@1104 -- # kernel_target 00:32:05.561 21:45:26 -- target/abort_qd_sizes.sh@66 -- # local name=kernel_target 00:32:05.561 21:45:26 -- target/abort_qd_sizes.sh@68 -- # configure_kernel_target kernel_target 00:32:05.561 21:45:26 -- nvmf/common.sh@621 -- # kernel_name=kernel_target 00:32:05.561 21:45:26 -- nvmf/common.sh@622 -- # nvmet=/sys/kernel/config/nvmet 00:32:05.561 21:45:26 -- nvmf/common.sh@623 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/kernel_target 00:32:05.561 21:45:26 -- nvmf/common.sh@624 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:32:05.561 21:45:26 -- nvmf/common.sh@625 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:32:05.562 21:45:26 -- nvmf/common.sh@627 -- # local block nvme 00:32:05.562 21:45:26 -- 
nvmf/common.sh@629 -- # [[ ! -e /sys/module/nvmet ]] 00:32:05.562 21:45:26 -- nvmf/common.sh@630 -- # modprobe nvmet 00:32:05.562 21:45:26 -- nvmf/common.sh@633 -- # [[ -e /sys/kernel/config/nvmet ]] 00:32:05.562 21:45:26 -- nvmf/common.sh@635 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:32:06.164 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:32:06.164 Waiting for block devices as requested 00:32:06.164 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:32:06.164 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:32:06.164 21:45:27 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:32:06.164 21:45:27 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme0n1 ]] 00:32:06.164 21:45:27 -- nvmf/common.sh@640 -- # block_in_use nvme0n1 00:32:06.164 21:45:27 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:32:06.164 21:45:27 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:32:06.164 No valid GPT data, bailing 00:32:06.164 21:45:27 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:32:06.164 21:45:27 -- scripts/common.sh@393 -- # pt= 00:32:06.164 21:45:27 -- scripts/common.sh@394 -- # return 1 00:32:06.164 21:45:27 -- nvmf/common.sh@640 -- # nvme=/dev/nvme0n1 00:32:06.164 21:45:27 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:32:06.164 21:45:27 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n1 ]] 00:32:06.426 21:45:27 -- nvmf/common.sh@640 -- # block_in_use nvme1n1 00:32:06.426 21:45:27 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:32:06.426 21:45:27 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:32:06.426 No valid GPT data, bailing 00:32:06.426 21:45:27 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:32:06.426 21:45:27 -- scripts/common.sh@393 -- # pt= 00:32:06.426 21:45:27 -- scripts/common.sh@394 -- # return 1 00:32:06.426 21:45:27 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n1 00:32:06.426 21:45:27 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:32:06.426 21:45:27 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n2 ]] 00:32:06.426 21:45:27 -- nvmf/common.sh@640 -- # block_in_use nvme1n2 00:32:06.426 21:45:27 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:32:06.426 21:45:27 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2 00:32:06.426 No valid GPT data, bailing 00:32:06.426 21:45:27 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:32:06.426 21:45:27 -- scripts/common.sh@393 -- # pt= 00:32:06.426 21:45:27 -- scripts/common.sh@394 -- # return 1 00:32:06.426 21:45:27 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n2 00:32:06.426 21:45:27 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:32:06.426 21:45:27 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n3 ]] 00:32:06.426 21:45:27 -- nvmf/common.sh@640 -- # block_in_use nvme1n3 00:32:06.426 21:45:27 -- scripts/common.sh@380 -- # local block=nvme1n3 pt 00:32:06.426 21:45:27 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3 00:32:06.426 No valid GPT data, bailing 00:32:06.426 21:45:27 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:32:06.426 21:45:27 -- scripts/common.sh@393 -- # pt= 00:32:06.426 21:45:27 -- scripts/common.sh@394 -- # return 1 00:32:06.426 21:45:27 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n3 00:32:06.426 21:45:27 -- nvmf/common.sh@643 -- # [[ -b 
/dev/nvme1n3 ]] 00:32:06.426 21:45:27 -- nvmf/common.sh@645 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:32:06.426 21:45:27 -- nvmf/common.sh@646 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:32:06.426 21:45:27 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:32:06.426 21:45:27 -- nvmf/common.sh@652 -- # echo SPDK-kernel_target 00:32:06.426 21:45:27 -- nvmf/common.sh@654 -- # echo 1 00:32:06.426 21:45:27 -- nvmf/common.sh@655 -- # echo /dev/nvme1n3 00:32:06.426 21:45:27 -- nvmf/common.sh@656 -- # echo 1 00:32:06.426 21:45:27 -- nvmf/common.sh@662 -- # echo 10.0.0.1 00:32:06.426 21:45:27 -- nvmf/common.sh@663 -- # echo tcp 00:32:06.426 21:45:27 -- nvmf/common.sh@664 -- # echo 4420 00:32:06.426 21:45:27 -- nvmf/common.sh@665 -- # echo ipv4 00:32:06.426 21:45:27 -- nvmf/common.sh@668 -- # ln -s /sys/kernel/config/nvmet/subsystems/kernel_target /sys/kernel/config/nvmet/ports/1/subsystems/ 00:32:06.426 21:45:27 -- nvmf/common.sh@671 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:65f0dc09-2f81-4c7b-a413-2a2a000e2750 --hostid=65f0dc09-2f81-4c7b-a413-2a2a000e2750 -a 10.0.0.1 -t tcp -s 4420 00:32:06.426 00:32:06.426 Discovery Log Number of Records 2, Generation counter 2 00:32:06.426 =====Discovery Log Entry 0====== 00:32:06.427 trtype: tcp 00:32:06.427 adrfam: ipv4 00:32:06.427 subtype: current discovery subsystem 00:32:06.427 treq: not specified, sq flow control disable supported 00:32:06.427 portid: 1 00:32:06.427 trsvcid: 4420 00:32:06.427 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:32:06.427 traddr: 10.0.0.1 00:32:06.427 eflags: none 00:32:06.427 sectype: none 00:32:06.427 =====Discovery Log Entry 1====== 00:32:06.427 trtype: tcp 00:32:06.427 adrfam: ipv4 00:32:06.427 subtype: nvme subsystem 00:32:06.427 treq: not specified, sq flow control disable supported 00:32:06.427 portid: 1 00:32:06.427 trsvcid: 4420 00:32:06.427 subnqn: kernel_target 00:32:06.427 traddr: 10.0.0.1 00:32:06.427 eflags: none 00:32:06.427 sectype: none 00:32:06.427 21:45:27 -- target/abort_qd_sizes.sh@69 -- # rabort tcp IPv4 10.0.0.1 4420 kernel_target 00:32:06.427 21:45:27 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:32:06.427 21:45:27 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:32:06.427 21:45:27 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:32:06.427 21:45:27 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:32:06.427 21:45:27 -- target/abort_qd_sizes.sh@21 -- # local subnqn=kernel_target 00:32:06.427 21:45:27 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:32:06.427 21:45:27 -- target/abort_qd_sizes.sh@24 -- # local target r 00:32:06.427 21:45:27 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:32:06.427 21:45:27 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:06.427 21:45:27 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:32:06.427 21:45:27 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:06.427 21:45:27 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:32:06.427 21:45:27 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:06.427 21:45:27 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:32:06.427 21:45:27 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:06.427 21:45:27 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 
00:32:06.427 21:45:27 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:06.427 21:45:27 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:32:06.427 21:45:27 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:06.427 21:45:27 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:32:09.705 Initializing NVMe Controllers 00:32:09.705 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:32:09.705 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:32:09.705 Initialization complete. Launching workers. 00:32:09.705 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 34900, failed: 0 00:32:09.705 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 34900, failed to submit 0 00:32:09.705 success 0, unsuccess 34900, failed 0 00:32:09.705 21:45:30 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:09.705 21:45:30 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:32:12.996 Initializing NVMe Controllers 00:32:12.996 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:32:12.996 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:32:12.996 Initialization complete. Launching workers. 00:32:12.996 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 68145, failed: 0 00:32:12.996 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 28602, failed to submit 39543 00:32:12.996 success 0, unsuccess 28602, failed 0 00:32:12.996 21:45:33 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:12.996 21:45:33 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:32:16.280 Initializing NVMe Controllers 00:32:16.280 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:32:16.280 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:32:16.280 Initialization complete. Launching workers. 
00:32:16.280 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 83051, failed: 0 00:32:16.280 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 20758, failed to submit 62293 00:32:16.280 success 0, unsuccess 20758, failed 0 00:32:16.280 21:45:36 -- target/abort_qd_sizes.sh@70 -- # clean_kernel_target 00:32:16.280 21:45:36 -- nvmf/common.sh@675 -- # [[ -e /sys/kernel/config/nvmet/subsystems/kernel_target ]] 00:32:16.280 21:45:36 -- nvmf/common.sh@677 -- # echo 0 00:32:16.280 21:45:36 -- nvmf/common.sh@679 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/kernel_target 00:32:16.280 21:45:36 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:32:16.280 21:45:36 -- nvmf/common.sh@681 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:32:16.280 21:45:36 -- nvmf/common.sh@682 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:32:16.280 21:45:36 -- nvmf/common.sh@684 -- # modules=(/sys/module/nvmet/holders/*) 00:32:16.280 21:45:36 -- nvmf/common.sh@686 -- # modprobe -r nvmet_tcp nvmet 00:32:16.280 00:32:16.280 real 0m10.474s 00:32:16.280 user 0m5.830s 00:32:16.280 sys 0m1.962s 00:32:16.280 21:45:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:16.280 ************************************ 00:32:16.280 END TEST kernel_target_abort 00:32:16.280 ************************************ 00:32:16.280 21:45:36 -- common/autotest_common.sh@10 -- # set +x 00:32:16.280 21:45:36 -- target/abort_qd_sizes.sh@86 -- # trap - SIGINT SIGTERM EXIT 00:32:16.280 21:45:36 -- target/abort_qd_sizes.sh@87 -- # nvmftestfini 00:32:16.280 21:45:36 -- nvmf/common.sh@476 -- # nvmfcleanup 00:32:16.280 21:45:36 -- nvmf/common.sh@116 -- # sync 00:32:16.280 21:45:37 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:32:16.280 21:45:37 -- nvmf/common.sh@119 -- # set +e 00:32:16.280 21:45:37 -- nvmf/common.sh@120 -- # for i in {1..20} 00:32:16.280 21:45:37 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:32:16.280 rmmod nvme_tcp 00:32:16.280 rmmod nvme_fabrics 00:32:16.280 rmmod nvme_keyring 00:32:16.280 21:45:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:32:16.280 21:45:37 -- nvmf/common.sh@123 -- # set -e 00:32:16.280 21:45:37 -- nvmf/common.sh@124 -- # return 0 00:32:16.280 21:45:37 -- nvmf/common.sh@477 -- # '[' -n 87891 ']' 00:32:16.280 21:45:37 -- nvmf/common.sh@478 -- # killprocess 87891 00:32:16.280 21:45:37 -- common/autotest_common.sh@926 -- # '[' -z 87891 ']' 00:32:16.280 21:45:37 -- common/autotest_common.sh@930 -- # kill -0 87891 00:32:16.280 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (87891) - No such process 00:32:16.280 Process with pid 87891 is not found 00:32:16.280 21:45:37 -- common/autotest_common.sh@953 -- # echo 'Process with pid 87891 is not found' 00:32:16.280 21:45:37 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:32:16.280 21:45:37 -- nvmf/common.sh@481 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:32:16.885 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:32:16.885 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:32:16.885 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:32:16.885 21:45:37 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:32:16.885 21:45:37 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:32:16.885 21:45:37 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:16.885 21:45:37 -- nvmf/common.sh@277 -- # 
remove_spdk_ns 00:32:16.885 21:45:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:16.885 21:45:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:16.885 21:45:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:16.885 21:45:37 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:32:16.885 00:32:16.885 real 0m24.311s 00:32:16.885 user 0m49.587s 00:32:16.885 sys 0m5.547s 00:32:16.885 21:45:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:16.885 ************************************ 00:32:16.885 END TEST nvmf_abort_qd_sizes 00:32:16.885 21:45:37 -- common/autotest_common.sh@10 -- # set +x 00:32:16.885 ************************************ 00:32:17.143 21:45:37 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:32:17.143 21:45:37 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:32:17.143 21:45:37 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:32:17.143 21:45:37 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:32:17.143 21:45:37 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:32:17.143 21:45:37 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:32:17.143 21:45:37 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:32:17.143 21:45:37 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:32:17.143 21:45:37 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:32:17.143 21:45:37 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:32:17.143 21:45:37 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:32:17.143 21:45:37 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:32:17.143 21:45:37 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:32:17.143 21:45:37 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:32:17.143 21:45:37 -- spdk/autotest.sh@378 -- # [[ 0 -eq 1 ]] 00:32:17.143 21:45:37 -- spdk/autotest.sh@383 -- # trap - SIGINT SIGTERM EXIT 00:32:17.143 21:45:37 -- spdk/autotest.sh@385 -- # timing_enter post_cleanup 00:32:17.143 21:45:37 -- common/autotest_common.sh@712 -- # xtrace_disable 00:32:17.143 21:45:37 -- common/autotest_common.sh@10 -- # set +x 00:32:17.143 21:45:37 -- spdk/autotest.sh@386 -- # autotest_cleanup 00:32:17.143 21:45:37 -- common/autotest_common.sh@1371 -- # local autotest_es=0 00:32:17.143 21:45:37 -- common/autotest_common.sh@1372 -- # xtrace_disable 00:32:17.143 21:45:37 -- common/autotest_common.sh@10 -- # set +x 00:32:19.042 INFO: APP EXITING 00:32:19.042 INFO: killing all VMs 00:32:19.042 INFO: killing vhost app 00:32:19.042 INFO: EXIT DONE 00:32:19.301 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:32:19.301 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:32:19.301 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:32:20.237 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:32:20.237 Cleaning 00:32:20.237 Removing: /var/run/dpdk/spdk0/config 00:32:20.237 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:32:20.237 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:32:20.237 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:32:20.237 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:32:20.237 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:32:20.237 Removing: /var/run/dpdk/spdk0/hugepage_info 00:32:20.237 Removing: /var/run/dpdk/spdk1/config 00:32:20.237 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:32:20.237 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:32:20.237 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 
00:32:20.237 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:32:20.237 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:32:20.237 Removing: /var/run/dpdk/spdk1/hugepage_info 00:32:20.237 Removing: /var/run/dpdk/spdk2/config 00:32:20.237 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:32:20.237 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:32:20.237 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:32:20.237 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:32:20.237 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:32:20.237 Removing: /var/run/dpdk/spdk2/hugepage_info 00:32:20.237 Removing: /var/run/dpdk/spdk3/config 00:32:20.237 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:32:20.237 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:32:20.237 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:32:20.237 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:32:20.237 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:32:20.237 Removing: /var/run/dpdk/spdk3/hugepage_info 00:32:20.237 Removing: /var/run/dpdk/spdk4/config 00:32:20.237 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:32:20.237 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:32:20.237 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:32:20.237 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:32:20.237 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:32:20.237 Removing: /var/run/dpdk/spdk4/hugepage_info 00:32:20.237 Removing: /dev/shm/nvmf_trace.0 00:32:20.237 Removing: /dev/shm/spdk_tgt_trace.pid65948 00:32:20.237 Removing: /var/run/dpdk/spdk0 00:32:20.237 Removing: /var/run/dpdk/spdk1 00:32:20.237 Removing: /var/run/dpdk/spdk2 00:32:20.237 Removing: /var/run/dpdk/spdk3 00:32:20.237 Removing: /var/run/dpdk/spdk4 00:32:20.237 Removing: /var/run/dpdk/spdk_pid65804 00:32:20.237 Removing: /var/run/dpdk/spdk_pid65948 00:32:20.237 Removing: /var/run/dpdk/spdk_pid66185 00:32:20.237 Removing: /var/run/dpdk/spdk_pid66375 00:32:20.237 Removing: /var/run/dpdk/spdk_pid66515 00:32:20.237 Removing: /var/run/dpdk/spdk_pid66584 00:32:20.237 Removing: /var/run/dpdk/spdk_pid66659 00:32:20.237 Removing: /var/run/dpdk/spdk_pid66744 00:32:20.237 Removing: /var/run/dpdk/spdk_pid66814 00:32:20.237 Removing: /var/run/dpdk/spdk_pid66853 00:32:20.237 Removing: /var/run/dpdk/spdk_pid66888 00:32:20.237 Removing: /var/run/dpdk/spdk_pid66943 00:32:20.237 Removing: /var/run/dpdk/spdk_pid67043 00:32:20.237 Removing: /var/run/dpdk/spdk_pid67480 00:32:20.237 Removing: /var/run/dpdk/spdk_pid67532 00:32:20.237 Removing: /var/run/dpdk/spdk_pid67583 00:32:20.237 Removing: /var/run/dpdk/spdk_pid67599 00:32:20.237 Removing: /var/run/dpdk/spdk_pid67668 00:32:20.237 Removing: /var/run/dpdk/spdk_pid67684 00:32:20.237 Removing: /var/run/dpdk/spdk_pid67751 00:32:20.237 Removing: /var/run/dpdk/spdk_pid67767 00:32:20.237 Removing: /var/run/dpdk/spdk_pid67813 00:32:20.237 Removing: /var/run/dpdk/spdk_pid67832 00:32:20.237 Removing: /var/run/dpdk/spdk_pid67877 00:32:20.237 Removing: /var/run/dpdk/spdk_pid67894 00:32:20.237 Removing: /var/run/dpdk/spdk_pid68011 00:32:20.237 Removing: /var/run/dpdk/spdk_pid68049 00:32:20.237 Removing: /var/run/dpdk/spdk_pid68122 00:32:20.237 Removing: /var/run/dpdk/spdk_pid68174 00:32:20.237 Removing: /var/run/dpdk/spdk_pid68198 00:32:20.237 Removing: /var/run/dpdk/spdk_pid68257 00:32:20.237 Removing: /var/run/dpdk/spdk_pid68277 00:32:20.237 Removing: /var/run/dpdk/spdk_pid68312 00:32:20.237 Removing: /var/run/dpdk/spdk_pid68331 
00:32:20.237 Removing: /var/run/dpdk/spdk_pid68366 00:32:20.237 Removing: /var/run/dpdk/spdk_pid68380 00:32:20.237 Removing: /var/run/dpdk/spdk_pid68420 00:32:20.237 Removing: /var/run/dpdk/spdk_pid68434 00:32:20.237 Removing: /var/run/dpdk/spdk_pid68468 00:32:20.237 Removing: /var/run/dpdk/spdk_pid68488 00:32:20.496 Removing: /var/run/dpdk/spdk_pid68522 00:32:20.496 Removing: /var/run/dpdk/spdk_pid68542 00:32:20.496 Removing: /var/run/dpdk/spdk_pid68571 00:32:20.496 Removing: /var/run/dpdk/spdk_pid68596 00:32:20.496 Removing: /var/run/dpdk/spdk_pid68625 00:32:20.496 Removing: /var/run/dpdk/spdk_pid68650 00:32:20.496 Removing: /var/run/dpdk/spdk_pid68679 00:32:20.496 Removing: /var/run/dpdk/spdk_pid68704 00:32:20.496 Removing: /var/run/dpdk/spdk_pid68733 00:32:20.496 Removing: /var/run/dpdk/spdk_pid68753 00:32:20.496 Removing: /var/run/dpdk/spdk_pid68787 00:32:20.496 Removing: /var/run/dpdk/spdk_pid68807 00:32:20.496 Removing: /var/run/dpdk/spdk_pid68841 00:32:20.496 Removing: /var/run/dpdk/spdk_pid68861 00:32:20.496 Removing: /var/run/dpdk/spdk_pid68895 00:32:20.496 Removing: /var/run/dpdk/spdk_pid68915 00:32:20.496 Removing: /var/run/dpdk/spdk_pid68944 00:32:20.496 Removing: /var/run/dpdk/spdk_pid68969 00:32:20.496 Removing: /var/run/dpdk/spdk_pid68998 00:32:20.496 Removing: /var/run/dpdk/spdk_pid69023 00:32:20.496 Removing: /var/run/dpdk/spdk_pid69052 00:32:20.496 Removing: /var/run/dpdk/spdk_pid69077 00:32:20.496 Removing: /var/run/dpdk/spdk_pid69106 00:32:20.496 Removing: /var/run/dpdk/spdk_pid69133 00:32:20.496 Removing: /var/run/dpdk/spdk_pid69166 00:32:20.496 Removing: /var/run/dpdk/spdk_pid69189 00:32:20.496 Removing: /var/run/dpdk/spdk_pid69226 00:32:20.496 Removing: /var/run/dpdk/spdk_pid69246 00:32:20.496 Removing: /var/run/dpdk/spdk_pid69280 00:32:20.496 Removing: /var/run/dpdk/spdk_pid69300 00:32:20.496 Removing: /var/run/dpdk/spdk_pid69335 00:32:20.496 Removing: /var/run/dpdk/spdk_pid69399 00:32:20.496 Removing: /var/run/dpdk/spdk_pid69491 00:32:20.496 Removing: /var/run/dpdk/spdk_pid69799 00:32:20.496 Removing: /var/run/dpdk/spdk_pid69811 00:32:20.496 Removing: /var/run/dpdk/spdk_pid69842 00:32:20.496 Removing: /var/run/dpdk/spdk_pid69860 00:32:20.496 Removing: /var/run/dpdk/spdk_pid69873 00:32:20.496 Removing: /var/run/dpdk/spdk_pid69897 00:32:20.496 Removing: /var/run/dpdk/spdk_pid69909 00:32:20.496 Removing: /var/run/dpdk/spdk_pid69923 00:32:20.496 Removing: /var/run/dpdk/spdk_pid69941 00:32:20.496 Removing: /var/run/dpdk/spdk_pid69959 00:32:20.496 Removing: /var/run/dpdk/spdk_pid69978 00:32:20.496 Removing: /var/run/dpdk/spdk_pid69996 00:32:20.496 Removing: /var/run/dpdk/spdk_pid70014 00:32:20.496 Removing: /var/run/dpdk/spdk_pid70022 00:32:20.496 Removing: /var/run/dpdk/spdk_pid70040 00:32:20.496 Removing: /var/run/dpdk/spdk_pid70058 00:32:20.496 Removing: /var/run/dpdk/spdk_pid70077 00:32:20.496 Removing: /var/run/dpdk/spdk_pid70095 00:32:20.496 Removing: /var/run/dpdk/spdk_pid70108 00:32:20.496 Removing: /var/run/dpdk/spdk_pid70121 00:32:20.496 Removing: /var/run/dpdk/spdk_pid70156 00:32:20.496 Removing: /var/run/dpdk/spdk_pid70174 00:32:20.496 Removing: /var/run/dpdk/spdk_pid70202 00:32:20.496 Removing: /var/run/dpdk/spdk_pid70258 00:32:20.496 Removing: /var/run/dpdk/spdk_pid70290 00:32:20.496 Removing: /var/run/dpdk/spdk_pid70300 00:32:20.496 Removing: /var/run/dpdk/spdk_pid70328 00:32:20.496 Removing: /var/run/dpdk/spdk_pid70339 00:32:20.496 Removing: /var/run/dpdk/spdk_pid70346 00:32:20.496 Removing: /var/run/dpdk/spdk_pid70387 00:32:20.496 Removing: 
/var/run/dpdk/spdk_pid70404 00:32:20.496 Removing: /var/run/dpdk/spdk_pid70430 00:32:20.496 Removing: /var/run/dpdk/spdk_pid70438 00:32:20.496 Removing: /var/run/dpdk/spdk_pid70450 00:32:20.496 Removing: /var/run/dpdk/spdk_pid70453 00:32:20.496 Removing: /var/run/dpdk/spdk_pid70466 00:32:20.496 Removing: /var/run/dpdk/spdk_pid70472 00:32:20.496 Removing: /var/run/dpdk/spdk_pid70481 00:32:20.496 Removing: /var/run/dpdk/spdk_pid70494 00:32:20.496 Removing: /var/run/dpdk/spdk_pid70515 00:32:20.496 Removing: /var/run/dpdk/spdk_pid70547 00:32:20.496 Removing: /var/run/dpdk/spdk_pid70555 00:32:20.496 Removing: /var/run/dpdk/spdk_pid70585 00:32:20.496 Removing: /var/run/dpdk/spdk_pid70600 00:32:20.496 Removing: /var/run/dpdk/spdk_pid70602 00:32:20.496 Removing: /var/run/dpdk/spdk_pid70648 00:32:20.496 Removing: /var/run/dpdk/spdk_pid70660 00:32:20.496 Removing: /var/run/dpdk/spdk_pid70686 00:32:20.496 Removing: /var/run/dpdk/spdk_pid70699 00:32:20.496 Removing: /var/run/dpdk/spdk_pid70701 00:32:20.496 Removing: /var/run/dpdk/spdk_pid70714 00:32:20.496 Removing: /var/run/dpdk/spdk_pid70722 00:32:20.496 Removing: /var/run/dpdk/spdk_pid70729 00:32:20.496 Removing: /var/run/dpdk/spdk_pid70741 00:32:20.496 Removing: /var/run/dpdk/spdk_pid70744 00:32:20.756 Removing: /var/run/dpdk/spdk_pid70817 00:32:20.756 Removing: /var/run/dpdk/spdk_pid70865 00:32:20.756 Removing: /var/run/dpdk/spdk_pid70974 00:32:20.756 Removing: /var/run/dpdk/spdk_pid71011 00:32:20.756 Removing: /var/run/dpdk/spdk_pid71055 00:32:20.756 Removing: /var/run/dpdk/spdk_pid71064 00:32:20.756 Removing: /var/run/dpdk/spdk_pid71084 00:32:20.756 Removing: /var/run/dpdk/spdk_pid71104 00:32:20.756 Removing: /var/run/dpdk/spdk_pid71139 00:32:20.756 Removing: /var/run/dpdk/spdk_pid71148 00:32:20.756 Removing: /var/run/dpdk/spdk_pid71216 00:32:20.756 Removing: /var/run/dpdk/spdk_pid71230 00:32:20.756 Removing: /var/run/dpdk/spdk_pid71284 00:32:20.756 Removing: /var/run/dpdk/spdk_pid71383 00:32:20.756 Removing: /var/run/dpdk/spdk_pid71444 00:32:20.756 Removing: /var/run/dpdk/spdk_pid71469 00:32:20.756 Removing: /var/run/dpdk/spdk_pid71563 00:32:20.756 Removing: /var/run/dpdk/spdk_pid71602 00:32:20.756 Removing: /var/run/dpdk/spdk_pid71635 00:32:20.756 Removing: /var/run/dpdk/spdk_pid71856 00:32:20.756 Removing: /var/run/dpdk/spdk_pid71949 00:32:20.756 Removing: /var/run/dpdk/spdk_pid71971 00:32:20.756 Removing: /var/run/dpdk/spdk_pid72292 00:32:20.756 Removing: /var/run/dpdk/spdk_pid72331 00:32:20.756 Removing: /var/run/dpdk/spdk_pid72631 00:32:20.756 Removing: /var/run/dpdk/spdk_pid73041 00:32:20.756 Removing: /var/run/dpdk/spdk_pid73299 00:32:20.756 Removing: /var/run/dpdk/spdk_pid74070 00:32:20.756 Removing: /var/run/dpdk/spdk_pid74884 00:32:20.756 Removing: /var/run/dpdk/spdk_pid75006 00:32:20.756 Removing: /var/run/dpdk/spdk_pid75068 00:32:20.756 Removing: /var/run/dpdk/spdk_pid76328 00:32:20.756 Removing: /var/run/dpdk/spdk_pid76541 00:32:20.756 Removing: /var/run/dpdk/spdk_pid76845 00:32:20.756 Removing: /var/run/dpdk/spdk_pid76954 00:32:20.756 Removing: /var/run/dpdk/spdk_pid77087 00:32:20.756 Removing: /var/run/dpdk/spdk_pid77115 00:32:20.756 Removing: /var/run/dpdk/spdk_pid77141 00:32:20.756 Removing: /var/run/dpdk/spdk_pid77170 00:32:20.756 Removing: /var/run/dpdk/spdk_pid77267 00:32:20.756 Removing: /var/run/dpdk/spdk_pid77406 00:32:20.756 Removing: /var/run/dpdk/spdk_pid77557 00:32:20.756 Removing: /var/run/dpdk/spdk_pid77632 00:32:20.756 Removing: /var/run/dpdk/spdk_pid78021 00:32:20.756 Removing: /var/run/dpdk/spdk_pid78356 
00:32:20.756 Removing: /var/run/dpdk/spdk_pid78364 00:32:20.756 Removing: /var/run/dpdk/spdk_pid80552 00:32:20.756 Removing: /var/run/dpdk/spdk_pid80554 00:32:20.756 Removing: /var/run/dpdk/spdk_pid80828 00:32:20.756 Removing: /var/run/dpdk/spdk_pid80843 00:32:20.756 Removing: /var/run/dpdk/spdk_pid80857 00:32:20.756 Removing: /var/run/dpdk/spdk_pid80888 00:32:20.756 Removing: /var/run/dpdk/spdk_pid80898 00:32:20.756 Removing: /var/run/dpdk/spdk_pid80981 00:32:20.756 Removing: /var/run/dpdk/spdk_pid80990 00:32:20.756 Removing: /var/run/dpdk/spdk_pid81098 00:32:20.756 Removing: /var/run/dpdk/spdk_pid81100 00:32:20.756 Removing: /var/run/dpdk/spdk_pid81208 00:32:20.756 Removing: /var/run/dpdk/spdk_pid81216 00:32:20.756 Removing: /var/run/dpdk/spdk_pid81618 00:32:20.756 Removing: /var/run/dpdk/spdk_pid81662 00:32:20.756 Removing: /var/run/dpdk/spdk_pid81771 00:32:20.756 Removing: /var/run/dpdk/spdk_pid81844 00:32:20.756 Removing: /var/run/dpdk/spdk_pid82143 00:32:20.756 Removing: /var/run/dpdk/spdk_pid82346 00:32:20.756 Removing: /var/run/dpdk/spdk_pid82726 00:32:20.756 Removing: /var/run/dpdk/spdk_pid83253 00:32:20.756 Removing: /var/run/dpdk/spdk_pid83694 00:32:20.756 Removing: /var/run/dpdk/spdk_pid83762 00:32:20.756 Removing: /var/run/dpdk/spdk_pid83817 00:32:20.756 Removing: /var/run/dpdk/spdk_pid83877 00:32:20.756 Removing: /var/run/dpdk/spdk_pid83985 00:32:20.756 Removing: /var/run/dpdk/spdk_pid84045 00:32:20.756 Removing: /var/run/dpdk/spdk_pid84105 00:32:20.756 Removing: /var/run/dpdk/spdk_pid84160 00:32:20.756 Removing: /var/run/dpdk/spdk_pid84482 00:32:20.756 Removing: /var/run/dpdk/spdk_pid85656 00:32:20.756 Removing: /var/run/dpdk/spdk_pid85800 00:32:20.756 Removing: /var/run/dpdk/spdk_pid86044 00:32:20.756 Removing: /var/run/dpdk/spdk_pid86600 00:32:20.756 Removing: /var/run/dpdk/spdk_pid86759 00:32:20.756 Removing: /var/run/dpdk/spdk_pid86920 00:32:20.756 Removing: /var/run/dpdk/spdk_pid87018 00:32:20.756 Removing: /var/run/dpdk/spdk_pid87177 00:32:20.756 Removing: /var/run/dpdk/spdk_pid87286 00:32:20.756 Removing: /var/run/dpdk/spdk_pid87948 00:32:20.756 Removing: /var/run/dpdk/spdk_pid87983 00:32:20.756 Removing: /var/run/dpdk/spdk_pid88018 00:32:20.756 Removing: /var/run/dpdk/spdk_pid88266 00:32:20.756 Removing: /var/run/dpdk/spdk_pid88297 00:32:21.016 Removing: /var/run/dpdk/spdk_pid88332 00:32:21.016 Clean 00:32:21.016 killing process with pid 60107 00:32:21.016 killing process with pid 60108 00:32:21.016 21:45:41 -- common/autotest_common.sh@1436 -- # return 0 00:32:21.016 21:45:41 -- spdk/autotest.sh@387 -- # timing_exit post_cleanup 00:32:21.016 21:45:41 -- common/autotest_common.sh@718 -- # xtrace_disable 00:32:21.016 21:45:41 -- common/autotest_common.sh@10 -- # set +x 00:32:21.016 21:45:41 -- spdk/autotest.sh@389 -- # timing_exit autotest 00:32:21.016 21:45:41 -- common/autotest_common.sh@718 -- # xtrace_disable 00:32:21.016 21:45:41 -- common/autotest_common.sh@10 -- # set +x 00:32:21.016 21:45:41 -- spdk/autotest.sh@390 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:32:21.016 21:45:41 -- spdk/autotest.sh@392 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:32:21.016 21:45:41 -- spdk/autotest.sh@392 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:32:21.016 21:45:41 -- spdk/autotest.sh@394 -- # hash lcov 00:32:21.016 21:45:41 -- spdk/autotest.sh@394 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:32:21.016 21:45:41 -- spdk/autotest.sh@396 -- # hostname 00:32:21.016 21:45:41 -- spdk/autotest.sh@396 -- # lcov --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1716830599-074-updated-1705279005 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:32:21.274 geninfo: WARNING: invalid characters removed from testname! 00:32:47.798 21:46:07 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:32:51.082 21:46:11 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:32:53.624 21:46:14 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:32:56.150 21:46:17 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:32:59.427 21:46:19 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:33:01.324 21:46:22 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:33:03.854 21:46:24 -- spdk/autotest.sh@403 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:33:03.854 21:46:24 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:33:03.854 21:46:24 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:33:03.854 21:46:24 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:03.854 21:46:24 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:03.854 21:46:24 -- paths/export.sh@2 -- $ 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:03.854 21:46:24 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:03.854 21:46:24 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:03.854 21:46:24 -- paths/export.sh@5 -- $ export PATH 00:33:03.854 21:46:24 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:03.854 21:46:24 -- common/autobuild_common.sh@434 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:33:03.854 21:46:24 -- common/autobuild_common.sh@435 -- $ date +%s 00:33:03.855 21:46:24 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1720734384.XXXXXX 00:33:03.855 21:46:24 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1720734384.E9iiWj 00:33:03.855 21:46:24 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:33:03.855 21:46:24 -- common/autobuild_common.sh@441 -- $ '[' -n v23.11 ']' 00:33:03.855 21:46:24 -- common/autobuild_common.sh@442 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:33:03.855 21:46:24 -- common/autobuild_common.sh@442 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:33:03.855 21:46:24 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:33:03.855 21:46:24 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:33:03.855 21:46:24 -- common/autobuild_common.sh@451 -- $ get_config_params 00:33:03.855 21:46:24 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:33:03.855 21:46:24 -- common/autotest_common.sh@10 -- $ set +x 00:33:03.855 21:46:24 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-dpdk=/home/vagrant/spdk_repo/dpdk/build' 00:33:03.855 21:46:24 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:33:03.855 21:46:24 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:33:03.855 21:46:24 -- 
spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:33:03.855 21:46:24 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:33:03.855 21:46:24 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:33:03.855 21:46:24 -- spdk/autopackage.sh@19 -- $ timing_finish 00:33:03.855 21:46:24 -- common/autotest_common.sh@724 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:33:03.855 21:46:24 -- common/autotest_common.sh@725 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:33:03.855 21:46:24 -- common/autotest_common.sh@727 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:33:04.136 21:46:24 -- spdk/autopackage.sh@20 -- $ exit 0 00:33:04.136 + [[ -n 5984 ]] 00:33:04.136 + sudo kill 5984 00:33:04.147 [Pipeline] } 00:33:04.168 [Pipeline] // timeout 00:33:04.174 [Pipeline] } 00:33:04.196 [Pipeline] // stage 00:33:04.201 [Pipeline] } 00:33:04.220 [Pipeline] // catchError 00:33:04.231 [Pipeline] stage 00:33:04.234 [Pipeline] { (Stop VM) 00:33:04.250 [Pipeline] sh 00:33:04.532 + vagrant halt 00:33:08.714 ==> default: Halting domain... 00:33:15.323 [Pipeline] sh 00:33:15.601 + vagrant destroy -f 00:33:19.784 ==> default: Removing domain... 00:33:19.798 [Pipeline] sh 00:33:20.078 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output 00:33:20.090 [Pipeline] } 00:33:20.111 [Pipeline] // stage 00:33:20.117 [Pipeline] } 00:33:20.136 [Pipeline] // dir 00:33:20.143 [Pipeline] } 00:33:20.166 [Pipeline] // wrap 00:33:20.172 [Pipeline] } 00:33:20.192 [Pipeline] // catchError 00:33:20.201 [Pipeline] stage 00:33:20.203 [Pipeline] { (Epilogue) 00:33:20.218 [Pipeline] sh 00:33:20.498 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:33:27.064 [Pipeline] catchError 00:33:27.066 [Pipeline] { 00:33:27.080 [Pipeline] sh 00:33:27.359 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:33:27.359 Artifacts sizes are good 00:33:27.367 [Pipeline] } 00:33:27.384 [Pipeline] // catchError 00:33:27.398 [Pipeline] archiveArtifacts 00:33:27.406 Archiving artifacts 00:33:27.559 [Pipeline] cleanWs 00:33:27.570 [WS-CLEANUP] Deleting project workspace... 00:33:27.570 [WS-CLEANUP] Deferred wipeout is used... 00:33:27.576 [WS-CLEANUP] done 00:33:27.578 [Pipeline] } 00:33:27.599 [Pipeline] // stage 00:33:27.604 [Pipeline] } 00:33:27.623 [Pipeline] // node 00:33:27.629 [Pipeline] End of Pipeline 00:33:27.654 Finished: SUCCESS